2025-06-22 19:08:33.730572 | Job console starting
2025-06-22 19:08:33.743916 | Updating git repos
2025-06-22 19:08:33.801129 | Cloning repos into workspace
2025-06-22 19:08:34.007868 | Restoring repo states
2025-06-22 19:08:34.037699 | Merging changes
2025-06-22 19:08:34.037719 | Checking out repos
2025-06-22 19:08:34.296331 | Preparing playbooks
2025-06-22 19:08:34.971835 | Running Ansible setup
2025-06-22 19:08:40.225827 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-22 19:08:40.924714 |
2025-06-22 19:08:40.924843 | PLAY [Base pre]
2025-06-22 19:08:40.940495 |
2025-06-22 19:08:40.940602 | TASK [Setup log path fact]
2025-06-22 19:08:40.959307 | orchestrator | ok
2025-06-22 19:08:40.975748 |
2025-06-22 19:08:40.975867 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-22 19:08:41.004451 | orchestrator | ok
2025-06-22 19:08:41.015914 |
2025-06-22 19:08:41.016028 | TASK [emit-job-header : Print job information]
2025-06-22 19:08:41.060595 | # Job Information
2025-06-22 19:08:41.060825 | Ansible Version: 2.16.14
2025-06-22 19:08:41.060897 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-06-22 19:08:41.060967 | Pipeline: post
2025-06-22 19:08:41.061020 | Executor: 521e9411259a
2025-06-22 19:08:41.061053 | Triggered by: https://github.com/osism/testbed/commit/18778fb5188c17e12df2cbfca8eeddeff314e785
2025-06-22 19:08:41.061086 | Event ID: 46e49494-4f9c-11f0-8fdb-d9b8f50935e9
2025-06-22 19:08:41.071780 |
2025-06-22 19:08:41.071905 | LOOP [emit-job-header : Print node information]
2025-06-22 19:08:41.180424 | orchestrator | ok:
2025-06-22 19:08:41.180618 | orchestrator | # Node Information
2025-06-22 19:08:41.180662 | orchestrator | Inventory Hostname: orchestrator
2025-06-22 19:08:41.180746 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-22 19:08:41.180775 | orchestrator | Username: zuul-testbed06
2025-06-22 19:08:41.180797 | orchestrator | Distro: Debian 12.11
2025-06-22 19:08:41.180822 | orchestrator | Provider: static-testbed
2025-06-22 19:08:41.180843 | orchestrator | Region:
2025-06-22 19:08:41.180865 | orchestrator | Label: testbed-orchestrator
2025-06-22 19:08:41.180886 | orchestrator | Product Name: OpenStack Nova
2025-06-22 19:08:41.180906 | orchestrator | Interface IP: 81.163.193.140
2025-06-22 19:08:41.201679 |
2025-06-22 19:08:41.201792 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-22 19:08:41.623953 | orchestrator -> localhost | changed
2025-06-22 19:08:41.642947 |
2025-06-22 19:08:41.643279 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-22 19:08:42.582922 | orchestrator -> localhost | changed
2025-06-22 19:08:42.596620 |
2025-06-22 19:08:42.596715 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-22 19:08:42.831751 | orchestrator -> localhost | ok
2025-06-22 19:08:42.838900 |
2025-06-22 19:08:42.838996 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-22 19:08:42.867231 | orchestrator | ok
2025-06-22 19:08:42.882478 | orchestrator | included: /var/lib/zuul/builds/26d6f32dffdd486a882d9dd5a6805904/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-22 19:08:42.889958 |
2025-06-22 19:08:42.890036 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-22 19:08:44.430608 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-22 19:08:44.430898 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/26d6f32dffdd486a882d9dd5a6805904/work/26d6f32dffdd486a882d9dd5a6805904_id_rsa
2025-06-22 19:08:44.430947 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/26d6f32dffdd486a882d9dd5a6805904/work/26d6f32dffdd486a882d9dd5a6805904_id_rsa.pub
2025-06-22 19:08:44.430974 | orchestrator -> localhost | The key fingerprint is:
2025-06-22 19:08:44.431002 | orchestrator -> localhost | SHA256:/biZDdh26hKebL9HdMtfUTAGx2LLbfupUtSYgCVNeZU zuul-build-sshkey
2025-06-22 19:08:44.431026 | orchestrator -> localhost | The key's randomart image is:
2025-06-22 19:08:44.431061 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-22 19:08:44.431083 | orchestrator -> localhost | | .=ooo*o.|
2025-06-22 19:08:44.431106 | orchestrator -> localhost | | ..+o+.E.|
2025-06-22 19:08:44.431126 | orchestrator -> localhost | | oo++ .|
2025-06-22 19:08:44.431147 | orchestrator -> localhost | | . +++o |
2025-06-22 19:08:44.431167 | orchestrator -> localhost | | S ...+ o.|
2025-06-22 19:08:44.431208 | orchestrator -> localhost | | .o o..+ .|
2025-06-22 19:08:44.431231 | orchestrator -> localhost | | o.o=.+ oo|
2025-06-22 19:08:44.431251 | orchestrator -> localhost | | *. X. .o|
2025-06-22 19:08:44.431271 | orchestrator -> localhost | | . +Ooo.. |
2025-06-22 19:08:44.431291 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-22 19:08:44.431360 | orchestrator -> localhost | ok: Runtime: 0:00:01.084661
2025-06-22 19:08:44.439394 |
2025-06-22 19:08:44.439508 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-22 19:08:44.472569 | orchestrator | ok
2025-06-22 19:08:44.482971 | orchestrator | included: /var/lib/zuul/builds/26d6f32dffdd486a882d9dd5a6805904/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-22 19:08:44.492439 |
2025-06-22 19:08:44.492544 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-22 19:08:44.520281 | orchestrator | skipping: Conditional result was False
2025-06-22 19:08:44.537665 |
2025-06-22 19:08:44.537860 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-22 19:08:45.116846 | orchestrator | changed
2025-06-22 19:08:45.123382 |
2025-06-22 19:08:45.123492 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-22 19:08:45.416570 | orchestrator | ok
2025-06-22 19:08:45.426361 |
2025-06-22 19:08:45.426501 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-22 19:08:45.854639 | orchestrator | ok
2025-06-22 19:08:45.862042 |
2025-06-22 19:08:45.862183 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-22 19:08:46.243252 | orchestrator | ok
2025-06-22 19:08:46.254732 |
2025-06-22 19:08:46.254919 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-22 19:08:46.280708 | orchestrator | skipping: Conditional result was False
2025-06-22 19:08:46.294182 |
2025-06-22 19:08:46.294395 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-22 19:08:46.744848 | orchestrator -> localhost | changed
2025-06-22 19:08:46.758895 |
2025-06-22 19:08:46.759016 | TASK [add-build-sshkey : Add back temp key]
2025-06-22 19:08:47.102735 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/26d6f32dffdd486a882d9dd5a6805904/work/26d6f32dffdd486a882d9dd5a6805904_id_rsa (zuul-build-sshkey)
2025-06-22 19:08:47.106346 | orchestrator -> localhost |
ok: Runtime: 0:00:00.012341
2025-06-22 19:08:47.123658 |
2025-06-22 19:08:47.123783 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-22 19:08:47.575684 | orchestrator | ok
2025-06-22 19:08:47.585113 |
2025-06-22 19:08:47.585265 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-22 19:08:47.620244 | orchestrator | skipping: Conditional result was False
2025-06-22 19:08:47.671470 |
2025-06-22 19:08:47.671608 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-22 19:08:48.063317 | orchestrator | ok
2025-06-22 19:08:48.077654 |
2025-06-22 19:08:48.077779 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-22 19:08:48.128507 | orchestrator | ok
2025-06-22 19:08:48.142686 |
2025-06-22 19:08:48.142984 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-22 19:08:48.473584 | orchestrator -> localhost | ok
2025-06-22 19:08:48.488714 |
2025-06-22 19:08:48.488874 | TASK [validate-host : Collect information about the host]
2025-06-22 19:08:49.674566 | orchestrator | ok
2025-06-22 19:08:49.690303 |
2025-06-22 19:08:49.690444 | TASK [validate-host : Sanitize hostname]
2025-06-22 19:08:49.770710 | orchestrator | ok
2025-06-22 19:08:49.779020 |
2025-06-22 19:08:49.779159 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-22 19:08:50.375006 | orchestrator -> localhost | changed
2025-06-22 19:08:50.386954 |
2025-06-22 19:08:50.387087 | TASK [validate-host : Collect information about zuul worker]
2025-06-22 19:08:50.840127 | orchestrator | ok
2025-06-22 19:08:50.845909 |
2025-06-22 19:08:50.846027 | TASK [validate-host : Write out all zuul information for each host]
2025-06-22 19:08:51.429610 | orchestrator -> localhost | changed
2025-06-22 19:08:51.441002 |
2025-06-22 19:08:51.441129 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-22 19:08:51.753878 | orchestrator | ok
2025-06-22 19:08:51.760433 |
2025-06-22 19:08:51.760561 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-22 19:09:29.365248 | orchestrator | changed:
2025-06-22 19:09:29.365556 | orchestrator | .d..t...... src/
2025-06-22 19:09:29.365628 | orchestrator | .d..t...... src/github.com/
2025-06-22 19:09:29.365675 | orchestrator | .d..t...... src/github.com/osism/
2025-06-22 19:09:29.365719 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-22 19:09:29.365760 | orchestrator | RedHat.yml
2025-06-22 19:09:29.387492 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-22 19:09:29.387522 | orchestrator | RedHat.yml
2025-06-22 19:09:29.387620 | orchestrator | = 1.53.0"...
2025-06-22 19:09:42.299816 | orchestrator | 19:09:42.299 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-22 19:09:42.376561 | orchestrator | 19:09:42.376 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-22 19:09:43.704343 | orchestrator | 19:09:43.704 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-22 19:09:45.175520 | orchestrator | 19:09:45.175 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-22 19:09:46.440787 | orchestrator | 19:09:46.440 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-22 19:09:47.591022 | orchestrator | 19:09:47.590 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-22 19:09:48.823160 | orchestrator | 19:09:48.822 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-06-22 19:09:50.187940 | orchestrator | 19:09:50.187 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-06-22 19:09:50.188225 | orchestrator | 19:09:50.188 STDOUT terraform: Providers are signed by their developers.
2025-06-22 19:09:50.188242 | orchestrator | 19:09:50.188 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-22 19:09:50.188248 | orchestrator | 19:09:50.188 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-22 19:09:50.188503 | orchestrator | 19:09:50.188 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-22 19:09:50.188517 | orchestrator | 19:09:50.188 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-22 19:09:50.188524 | orchestrator | 19:09:50.188 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-22 19:09:50.188529 | orchestrator | 19:09:50.188 STDOUT terraform: you run "tofu init" in the future.
2025-06-22 19:09:50.189181 | orchestrator | 19:09:50.189 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-22 19:09:50.189519 | orchestrator | 19:09:50.189 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-22 19:09:50.189531 | orchestrator | 19:09:50.189 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-22 19:09:50.189535 | orchestrator | 19:09:50.189 STDOUT terraform: should now work.
2025-06-22 19:09:50.189540 | orchestrator | 19:09:50.189 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-22 19:09:50.189544 | orchestrator | 19:09:50.189 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-22 19:09:50.189550 | orchestrator | 19:09:50.189 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-22 19:09:50.289378 | orchestrator | 19:09:50.289 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-22 19:09:50.289518 | orchestrator | 19:09:50.289 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-06-22 19:09:50.498046 | orchestrator | 19:09:50.497 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-22 19:09:50.498090 | orchestrator | 19:09:50.497 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-22 19:09:50.498103 | orchestrator | 19:09:50.497 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-22 19:09:50.498109 | orchestrator | 19:09:50.498 STDOUT terraform: for this configuration.
2025-06-22 19:09:50.649280 | orchestrator | 19:09:50.647 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
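Editor's note: the provider resolution above is the kind of output "tofu init" prints for a required_providers block roughly like the sketch below. The constraints are inferred from the log (assuming the truncated `= 1.53.0"` fragment belongs to the OpenStack provider lookup, ">= 2.2.0" to hashicorp/local, and no constraint on hashicorp/null); this is not copied from the osism/testbed sources, and the Terragrunt wrapper and "ci" workspace handling are omitted.

# Hypothetical required_providers block consistent with the init output above.
# Version constraints are inferred from the log, not taken from osism/testbed.
terraform {
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0" # resolved to v3.2.0 in this run
    }
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0" # resolved to v2.5.3 in this run
    }
    null = {
      source = "hashicorp/null" # no constraint; latest (v3.2.4) in this run
    }
  }
}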
2025-06-22 19:09:50.649336 | orchestrator | 19:09:50.647 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-06-22 19:09:50.769380 | orchestrator | 19:09:50.769 STDOUT terraform: ci.auto.tfvars
2025-06-22 19:09:50.771526 | orchestrator | 19:09:50.771 STDOUT terraform: default_custom.tf
2025-06-22 19:09:50.892096 | orchestrator | 19:09:50.891 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed06/terraform` instead.
2025-06-22 19:09:51.918071 | orchestrator | 19:09:51.914 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-06-22 19:09:52.440647 | orchestrator | 19:09:52.440 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-22 19:09:52.711062 | orchestrator | 19:09:52.710 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-22 19:09:52.711132 | orchestrator | 19:09:52.711 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-22 19:09:52.711139 | orchestrator | 19:09:52.711 STDOUT terraform:  + create
2025-06-22 19:09:52.711144 | orchestrator | 19:09:52.711 STDOUT terraform:  <= read (data resources)
2025-06-22 19:09:52.711152 | orchestrator | 19:09:52.711 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-22 19:09:52.711201 | orchestrator | 19:09:52.711 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-06-22 19:09:52.711233 | orchestrator | 19:09:52.711 STDOUT terraform:  # (config refers to values not yet known)
2025-06-22 19:09:52.711271 | orchestrator | 19:09:52.711 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-22 19:09:52.711304 | orchestrator | 19:09:52.711 STDOUT terraform:  + checksum = (known after apply)
2025-06-22 19:09:52.711338 | orchestrator | 19:09:52.711 STDOUT terraform:  + created_at = (known after apply)
2025-06-22 19:09:52.711372 | orchestrator | 19:09:52.711 STDOUT terraform:  + file = (known after apply)
2025-06-22 19:09:52.711403 | orchestrator | 19:09:52.711 STDOUT terraform:  + id = (known after apply)
2025-06-22 19:09:52.711431 | orchestrator | 19:09:52.711 STDOUT terraform:  + metadata = (known after apply)
2025-06-22 19:09:52.711452 | orchestrator | 19:09:52.711 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-22 19:09:52.711486 | orchestrator | 19:09:52.711 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-22 19:09:52.711507 | orchestrator | 19:09:52.711 STDOUT terraform:  + most_recent = true
2025-06-22 19:09:52.711540 | orchestrator | 19:09:52.711 STDOUT terraform:  + name = (known after apply)
2025-06-22 19:09:52.711571 | orchestrator | 19:09:52.711 STDOUT terraform:  + protected = (known after apply)
2025-06-22 19:09:52.711601 | orchestrator | 19:09:52.711 STDOUT terraform:  + region = (known after apply)
2025-06-22 19:09:52.711630 | orchestrator | 19:09:52.711 STDOUT terraform:  + schema = (known after apply)
2025-06-22 19:09:52.711657 | orchestrator | 19:09:52.711 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-22 19:09:52.711687 | orchestrator | 19:09:52.711 STDOUT terraform:  + tags = (known after apply)
2025-06-22 19:09:52.711717 | orchestrator | 19:09:52.711 STDOUT terraform:  + updated_at = (known after apply)
2025-06-22 19:09:52.711725 | orchestrator |
19:09:52.711 STDOUT terraform:  } 2025-06-22 19:09:52.711777 | orchestrator | 19:09:52.711 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-06-22 19:09:52.711802 | orchestrator | 19:09:52.711 STDOUT terraform:  # (config refers to values not yet known) 2025-06-22 19:09:52.711839 | orchestrator | 19:09:52.711 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-06-22 19:09:52.711888 | orchestrator | 19:09:52.711 STDOUT terraform:  + checksum = (known after apply) 2025-06-22 19:09:52.711920 | orchestrator | 19:09:52.711 STDOUT terraform:  + created_at = (known after apply) 2025-06-22 19:09:52.711951 | orchestrator | 19:09:52.711 STDOUT terraform:  + file = (known after apply) 2025-06-22 19:09:52.711981 | orchestrator | 19:09:52.711 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.712012 | orchestrator | 19:09:52.711 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.712043 | orchestrator | 19:09:52.712 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-06-22 19:09:52.712075 | orchestrator | 19:09:52.712 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-06-22 19:09:52.712103 | orchestrator | 19:09:52.712 STDOUT terraform:  + most_recent = true 2025-06-22 19:09:52.712125 | orchestrator | 19:09:52.712 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.712156 | orchestrator | 19:09:52.712 STDOUT terraform:  + protected = (known after apply) 2025-06-22 19:09:52.712209 | orchestrator | 19:09:52.712 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.712218 | orchestrator | 19:09:52.712 STDOUT terraform:  + schema = (known after apply) 2025-06-22 19:09:52.712252 | orchestrator | 19:09:52.712 STDOUT terraform:  + size_bytes = (known after apply) 2025-06-22 19:09:52.712279 | orchestrator | 19:09:52.712 STDOUT terraform:  + tags = (known after apply) 2025-06-22 19:09:52.712308 | orchestrator | 19:09:52.712 STDOUT terraform:  + updated_at = (known after apply) 2025-06-22 19:09:52.712316 | orchestrator | 19:09:52.712 STDOUT terraform:  } 2025-06-22 19:09:52.712375 | orchestrator | 19:09:52.712 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-06-22 19:09:52.712407 | orchestrator | 19:09:52.712 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-06-22 19:09:52.712443 | orchestrator | 19:09:52.712 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:09:52.712482 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:09:52.712517 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:09:52.712559 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:09:52.712591 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:09:52.712627 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:09:52.712666 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:09:52.712676 | orchestrator | 19:09:52.712 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:09:52.712706 | orchestrator | 19:09:52.712 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:09:52.712740 | orchestrator | 19:09:52.712 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-06-22 19:09:52.712777 | orchestrator | 19:09:52.712 STDOUT 
terraform:  + id = (known after apply) 2025-06-22 19:09:52.712784 | orchestrator | 19:09:52.712 STDOUT terraform:  } 2025-06-22 19:09:52.712817 | orchestrator | 19:09:52.712 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-06-22 19:09:52.712845 | orchestrator | 19:09:52.712 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-06-22 19:09:52.712880 | orchestrator | 19:09:52.712 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:09:52.712918 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:09:52.712952 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:09:52.712989 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:09:52.713023 | orchestrator | 19:09:52.712 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:09:52.713058 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:09:52.713093 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:09:52.713116 | orchestrator | 19:09:52.713 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:09:52.713141 | orchestrator | 19:09:52.713 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:09:52.713191 | orchestrator | 19:09:52.713 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-06-22 19:09:52.713216 | orchestrator | 19:09:52.713 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.713224 | orchestrator | 19:09:52.713 STDOUT terraform:  } 2025-06-22 19:09:52.713254 | orchestrator | 19:09:52.713 STDOUT terraform:  # local_file.inventory will be created 2025-06-22 19:09:52.713276 | orchestrator | 19:09:52.713 STDOUT terraform:  + resource "local_file" "inventory" { 2025-06-22 19:09:52.713313 | orchestrator | 19:09:52.713 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:09:52.713347 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:09:52.713381 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:09:52.713416 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:09:52.713451 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:09:52.713486 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:09:52.713521 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:09:52.713545 | orchestrator | 19:09:52.713 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:09:52.713569 | orchestrator | 19:09:52.713 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:09:52.713599 | orchestrator | 19:09:52.713 STDOUT terraform:  + filename = "inventory.ci" 2025-06-22 19:09:52.713637 | orchestrator | 19:09:52.713 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.713644 | orchestrator | 19:09:52.713 STDOUT terraform:  } 2025-06-22 19:09:52.713676 | orchestrator | 19:09:52.713 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-06-22 19:09:52.713709 | orchestrator | 19:09:52.713 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-06-22 19:09:52.713741 | orchestrator | 19:09:52.713 STDOUT terraform:  + content = (sensitive value) 2025-06-22 
19:09:52.713773 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:09:52.713811 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:09:52.713846 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:09:52.713881 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:09:52.713915 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:09:52.713950 | orchestrator | 19:09:52.713 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:09:52.713973 | orchestrator | 19:09:52.713 STDOUT terraform:  + directory_permission = "0700" 2025-06-22 19:09:52.713998 | orchestrator | 19:09:52.713 STDOUT terraform:  + file_permission = "0600" 2025-06-22 19:09:52.714049 | orchestrator | 19:09:52.713 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-06-22 19:09:52.714085 | orchestrator | 19:09:52.714 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.714092 | orchestrator | 19:09:52.714 STDOUT terraform:  } 2025-06-22 19:09:52.714123 | orchestrator | 19:09:52.714 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-06-22 19:09:52.714152 | orchestrator | 19:09:52.714 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-06-22 19:09:52.714207 | orchestrator | 19:09:52.714 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.714216 | orchestrator | 19:09:52.714 STDOUT terraform:  } 2025-06-22 19:09:52.714265 | orchestrator | 19:09:52.714 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-06-22 19:09:52.714317 | orchestrator | 19:09:52.714 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-06-22 19:09:52.714349 | orchestrator | 19:09:52.714 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.714376 | orchestrator | 19:09:52.714 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.714412 | orchestrator | 19:09:52.714 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.714448 | orchestrator | 19:09:52.714 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.714485 | orchestrator | 19:09:52.714 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.714528 | orchestrator | 19:09:52.714 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-06-22 19:09:52.714566 | orchestrator | 19:09:52.714 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.714589 | orchestrator | 19:09:52.714 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.714615 | orchestrator | 19:09:52.714 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.714643 | orchestrator | 19:09:52.714 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.714651 | orchestrator | 19:09:52.714 STDOUT terraform:  } 2025-06-22 19:09:52.714706 | orchestrator | 19:09:52.714 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-06-22 19:09:52.714752 | orchestrator | 19:09:52.714 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.714788 | orchestrator | 19:09:52.714 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.714810 | orchestrator | 19:09:52.714 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 
19:09:52.714850 | orchestrator | 19:09:52.714 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.714885 | orchestrator | 19:09:52.714 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.714920 | orchestrator | 19:09:52.714 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.714964 | orchestrator | 19:09:52.714 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-06-22 19:09:52.714998 | orchestrator | 19:09:52.714 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.715020 | orchestrator | 19:09:52.714 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.715046 | orchestrator | 19:09:52.715 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.715073 | orchestrator | 19:09:52.715 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.715080 | orchestrator | 19:09:52.715 STDOUT terraform:  } 2025-06-22 19:09:52.715129 | orchestrator | 19:09:52.715 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-06-22 19:09:52.715188 | orchestrator | 19:09:52.715 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.715218 | orchestrator | 19:09:52.715 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.715243 | orchestrator | 19:09:52.715 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.715282 | orchestrator | 19:09:52.715 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.715321 | orchestrator | 19:09:52.715 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.715357 | orchestrator | 19:09:52.715 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.715400 | orchestrator | 19:09:52.715 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-06-22 19:09:52.715435 | orchestrator | 19:09:52.715 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.715459 | orchestrator | 19:09:52.715 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.715485 | orchestrator | 19:09:52.715 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.715511 | orchestrator | 19:09:52.715 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.715518 | orchestrator | 19:09:52.715 STDOUT terraform:  } 2025-06-22 19:09:52.715564 | orchestrator | 19:09:52.715 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-06-22 19:09:52.715608 | orchestrator | 19:09:52.715 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.715644 | orchestrator | 19:09:52.715 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.715673 | orchestrator | 19:09:52.715 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.715707 | orchestrator | 19:09:52.715 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.715742 | orchestrator | 19:09:52.715 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.715776 | orchestrator | 19:09:52.715 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.715820 | orchestrator | 19:09:52.715 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-06-22 19:09:52.715855 | orchestrator | 19:09:52.715 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.715877 | orchestrator | 19:09:52.715 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.715901 | orchestrator | 19:09:52.715 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-06-22 19:09:52.715929 | orchestrator | 19:09:52.715 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.715949 | orchestrator | 19:09:52.715 STDOUT terraform:  } 2025-06-22 19:09:52.715997 | orchestrator | 19:09:52.715 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-06-22 19:09:52.716044 | orchestrator | 19:09:52.715 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.716078 | orchestrator | 19:09:52.716 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.716103 | orchestrator | 19:09:52.716 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.716140 | orchestrator | 19:09:52.716 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.716191 | orchestrator | 19:09:52.716 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.716225 | orchestrator | 19:09:52.716 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.716279 | orchestrator | 19:09:52.716 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-06-22 19:09:52.716316 | orchestrator | 19:09:52.716 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.716337 | orchestrator | 19:09:52.716 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.716362 | orchestrator | 19:09:52.716 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.716387 | orchestrator | 19:09:52.716 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.716394 | orchestrator | 19:09:52.716 STDOUT terraform:  } 2025-06-22 19:09:52.716459 | orchestrator | 19:09:52.716 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-06-22 19:09:52.716507 | orchestrator | 19:09:52.716 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.716540 | orchestrator | 19:09:52.716 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.716563 | orchestrator | 19:09:52.716 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.716599 | orchestrator | 19:09:52.716 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.716635 | orchestrator | 19:09:52.716 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.716671 | orchestrator | 19:09:52.716 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.716716 | orchestrator | 19:09:52.716 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-06-22 19:09:52.716751 | orchestrator | 19:09:52.716 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.716771 | orchestrator | 19:09:52.716 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.716797 | orchestrator | 19:09:52.716 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.716822 | orchestrator | 19:09:52.716 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.716828 | orchestrator | 19:09:52.716 STDOUT terraform:  } 2025-06-22 19:09:52.716877 | orchestrator | 19:09:52.716 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-06-22 19:09:52.716920 | orchestrator | 19:09:52.716 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.716957 | orchestrator | 19:09:52.716 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.716982 | orchestrator | 19:09:52.716 STDOUT terraform:  + availability_zone = "nova" 
2025-06-22 19:09:52.717017 | orchestrator | 19:09:52.716 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.717066 | orchestrator | 19:09:52.717 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.717092 | orchestrator | 19:09:52.717 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.717141 | orchestrator | 19:09:52.717 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-06-22 19:09:52.717204 | orchestrator | 19:09:52.717 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.717210 | orchestrator | 19:09:52.717 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.717222 | orchestrator | 19:09:52.717 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.717241 | orchestrator | 19:09:52.717 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.717248 | orchestrator | 19:09:52.717 STDOUT terraform:  } 2025-06-22 19:09:52.717294 | orchestrator | 19:09:52.717 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-06-22 19:09:52.717335 | orchestrator | 19:09:52.717 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.717378 | orchestrator | 19:09:52.717 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.717397 | orchestrator | 19:09:52.717 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.717435 | orchestrator | 19:09:52.717 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.717472 | orchestrator | 19:09:52.717 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.717509 | orchestrator | 19:09:52.717 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-22 19:09:52.717546 | orchestrator | 19:09:52.717 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.717567 | orchestrator | 19:09:52.717 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.717592 | orchestrator | 19:09:52.717 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.717616 | orchestrator | 19:09:52.717 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.717624 | orchestrator | 19:09:52.717 STDOUT terraform:  } 2025-06-22 19:09:52.717698 | orchestrator | 19:09:52.717 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-22 19:09:52.718975 | orchestrator | 19:09:52.717 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.719019 | orchestrator | 19:09:52.718 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.719049 | orchestrator | 19:09:52.719 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.719100 | orchestrator | 19:09:52.719 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.719154 | orchestrator | 19:09:52.719 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.719214 | orchestrator | 19:09:52.719 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-22 19:09:52.719255 | orchestrator | 19:09:52.719 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.719279 | orchestrator | 19:09:52.719 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.719316 | orchestrator | 19:09:52.719 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.719340 | orchestrator | 19:09:52.719 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.719347 | orchestrator | 19:09:52.719 STDOUT terraform:  } 2025-06-22 19:09:52.719408 | orchestrator 
| 19:09:52.719 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-22 19:09:52.719466 | orchestrator | 19:09:52.719 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.719503 | orchestrator | 19:09:52.719 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.719544 | orchestrator | 19:09:52.719 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.719583 | orchestrator | 19:09:52.719 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.719632 | orchestrator | 19:09:52.719 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.719677 | orchestrator | 19:09:52.719 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-22 19:09:52.719722 | orchestrator | 19:09:52.719 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.719745 | orchestrator | 19:09:52.719 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.719788 | orchestrator | 19:09:52.719 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.719809 | orchestrator | 19:09:52.719 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.719816 | orchestrator | 19:09:52.719 STDOUT terraform:  } 2025-06-22 19:09:52.719878 | orchestrator | 19:09:52.719 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-22 19:09:52.719932 | orchestrator | 19:09:52.719 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.719968 | orchestrator | 19:09:52.719 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.720006 | orchestrator | 19:09:52.719 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.720043 | orchestrator | 19:09:52.720 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.720090 | orchestrator | 19:09:52.720 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.720128 | orchestrator | 19:09:52.720 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-22 19:09:52.720190 | orchestrator | 19:09:52.720 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.720210 | orchestrator | 19:09:52.720 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.720246 | orchestrator | 19:09:52.720 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.720273 | orchestrator | 19:09:52.720 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.720280 | orchestrator | 19:09:52.720 STDOUT terraform:  } 2025-06-22 19:09:52.720340 | orchestrator | 19:09:52.720 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-22 19:09:52.720394 | orchestrator | 19:09:52.720 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.720429 | orchestrator | 19:09:52.720 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.720467 | orchestrator | 19:09:52.720 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.720504 | orchestrator | 19:09:52.720 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.720553 | orchestrator | 19:09:52.720 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.720592 | orchestrator | 19:09:52.720 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-22 19:09:52.720931 | orchestrator | 19:09:52.720 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.720946 | orchestrator | 19:09:52.720 STDOUT 
terraform:  + size = 20 2025-06-22 19:09:52.720990 | orchestrator | 19:09:52.720 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.721017 | orchestrator | 19:09:52.720 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.721049 | orchestrator | 19:09:52.721 STDOUT terraform:  } 2025-06-22 19:09:52.721095 | orchestrator | 19:09:52.721 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-22 19:09:52.721245 | orchestrator | 19:09:52.721 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.721299 | orchestrator | 19:09:52.721 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.721325 | orchestrator | 19:09:52.721 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.721373 | orchestrator | 19:09:52.721 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.721408 | orchestrator | 19:09:52.721 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.721528 | orchestrator | 19:09:52.721 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-22 19:09:52.721580 | orchestrator | 19:09:52.721 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.721607 | orchestrator | 19:09:52.721 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.721629 | orchestrator | 19:09:52.721 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.721667 | orchestrator | 19:09:52.721 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.721675 | orchestrator | 19:09:52.721 STDOUT terraform:  } 2025-06-22 19:09:52.721936 | orchestrator | 19:09:52.721 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-22 19:09:52.722000 | orchestrator | 19:09:52.721 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.722143 | orchestrator | 19:09:52.721 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.722201 | orchestrator | 19:09:52.722 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.722231 | orchestrator | 19:09:52.722 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.722281 | orchestrator | 19:09:52.722 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.722321 | orchestrator | 19:09:52.722 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-22 19:09:52.722371 | orchestrator | 19:09:52.722 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.722392 | orchestrator | 19:09:52.722 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.722433 | orchestrator | 19:09:52.722 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.722458 | orchestrator | 19:09:52.722 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.722465 | orchestrator | 19:09:52.722 STDOUT terraform:  } 2025-06-22 19:09:52.722526 | orchestrator | 19:09:52.722 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-22 19:09:52.722570 | orchestrator | 19:09:52.722 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.722621 | orchestrator | 19:09:52.722 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.722647 | orchestrator | 19:09:52.722 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.722697 | orchestrator | 19:09:52.722 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.722870 | orchestrator | 
19:09:52.722 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.722913 | orchestrator | 19:09:52.722 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-22 19:09:52.722966 | orchestrator | 19:09:52.722 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.722989 | orchestrator | 19:09:52.722 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.723028 | orchestrator | 19:09:52.722 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.723051 | orchestrator | 19:09:52.723 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.723058 | orchestrator | 19:09:52.723 STDOUT terraform:  } 2025-06-22 19:09:52.723123 | orchestrator | 19:09:52.723 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-22 19:09:52.723191 | orchestrator | 19:09:52.723 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.723224 | orchestrator | 19:09:52.723 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.723261 | orchestrator | 19:09:52.723 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.723300 | orchestrator | 19:09:52.723 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.723349 | orchestrator | 19:09:52.723 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.723394 | orchestrator | 19:09:52.723 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-22 19:09:52.723437 | orchestrator | 19:09:52.723 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.723460 | orchestrator | 19:09:52.723 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.723635 | orchestrator | 19:09:52.723 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.723659 | orchestrator | 19:09:52.723 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.723667 | orchestrator | 19:09:52.723 STDOUT terraform:  } 2025-06-22 19:09:52.723727 | orchestrator | 19:09:52.723 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-22 19:09:52.723769 | orchestrator | 19:09:52.723 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-22 19:09:52.723804 | orchestrator | 19:09:52.723 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.723854 | orchestrator | 19:09:52.723 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.723889 | orchestrator | 19:09:52.723 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.723923 | orchestrator | 19:09:52.723 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.723951 | orchestrator | 19:09:52.723 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.723963 | orchestrator | 19:09:52.723 STDOUT terraform:  + config_drive = true 2025-06-22 19:09:52.724003 | orchestrator | 19:09:52.723 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:09:52.724037 | orchestrator | 19:09:52.723 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:09:52.724082 | orchestrator | 19:09:52.724 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-22 19:09:52.724106 | orchestrator | 19:09:52.724 STDOUT terraform:  + force_delete = false 2025-06-22 19:09:52.724140 | orchestrator | 19:09:52.724 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:09:52.724201 | orchestrator | 19:09:52.724 STDOUT terraform:  + id = (known after apply) 2025-06-22 
19:09:52.724211 | orchestrator | 19:09:52.724 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.724250 | orchestrator | 19:09:52.724 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:09:52.724278 | orchestrator | 19:09:52.724 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:09:52.724309 | orchestrator | 19:09:52.724 STDOUT terraform:  + name = "testbed-manager" 2025-06-22 19:09:52.724333 | orchestrator | 19:09:52.724 STDOUT terraform:  + power_state = "active" 2025-06-22 19:09:52.724367 | orchestrator | 19:09:52.724 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.724404 | orchestrator | 19:09:52.724 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:09:52.724432 | orchestrator | 19:09:52.724 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:09:52.724650 | orchestrator | 19:09:52.724 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:09:52.724686 | orchestrator | 19:09:52.724 STDOUT terraform:  + user_data = (known after apply) 2025-06-22 19:09:52.724694 | orchestrator | 19:09:52.724 STDOUT terraform:  + block_device { 2025-06-22 19:09:52.724725 | orchestrator | 19:09:52.724 STDOUT terraform:  + boot_index = 0 2025-06-22 19:09:52.724754 | orchestrator | 19:09:52.724 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:09:52.724785 | orchestrator | 19:09:52.724 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:09:52.724816 | orchestrator | 19:09:52.724 STDOUT terraform:  + multiattach = false 2025-06-22 19:09:52.724841 | orchestrator | 19:09:52.724 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:09:52.724882 | orchestrator | 19:09:52.724 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.724889 | orchestrator | 19:09:52.724 STDOUT terraform:  } 2025-06-22 19:09:52.724912 | orchestrator | 19:09:52.724 STDOUT terraform:  + network { 2025-06-22 19:09:52.724920 | orchestrator | 19:09:52.724 STDOUT terraform:  + access_network = false 2025-06-22 19:09:52.724968 | orchestrator | 19:09:52.724 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:09:52.724987 | orchestrator | 19:09:52.724 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:09:52.725020 | orchestrator | 19:09:52.724 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:09:52.725051 | orchestrator | 19:09:52.725 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.725082 | orchestrator | 19:09:52.725 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:09:52.725113 | orchestrator | 19:09:52.725 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.725120 | orchestrator | 19:09:52.725 STDOUT terraform:  } 2025-06-22 19:09:52.725138 | orchestrator | 19:09:52.725 STDOUT terraform:  } 2025-06-22 19:09:52.725206 | orchestrator | 19:09:52.725 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-22 19:09:52.725247 | orchestrator | 19:09:52.725 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:09:52.725286 | orchestrator | 19:09:52.725 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.725469 | orchestrator | 19:09:52.725 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.725506 | orchestrator | 19:09:52.725 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.725544 | orchestrator | 19:09:52.725 STDOUT terraform:  + all_tags = (known after apply) 
2025-06-22 19:09:52.725569 | orchestrator | 19:09:52.725 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.725591 | orchestrator | 19:09:52.725 STDOUT terraform:  + config_drive = true 2025-06-22 19:09:52.725624 | orchestrator | 19:09:52.725 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:09:52.725659 | orchestrator | 19:09:52.725 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:09:52.725690 | orchestrator | 19:09:52.725 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:09:52.725715 | orchestrator | 19:09:52.725 STDOUT terraform:  + force_delete = false 2025-06-22 19:09:52.725747 | orchestrator | 19:09:52.725 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:09:52.725784 | orchestrator | 19:09:52.725 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.725818 | orchestrator | 19:09:52.725 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.725853 | orchestrator | 19:09:52.725 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:09:52.725881 | orchestrator | 19:09:52.725 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:09:52.725913 | orchestrator | 19:09:52.725 STDOUT terraform:  + name = "testbed-node-0" 2025-06-22 19:09:52.725937 | orchestrator | 19:09:52.725 STDOUT terraform:  + power_state = "active" 2025-06-22 19:09:52.725977 | orchestrator | 19:09:52.725 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.726031 | orchestrator | 19:09:52.725 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:09:52.726055 | orchestrator | 19:09:52.726 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:09:52.726088 | orchestrator | 19:09:52.726 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:09:52.726137 | orchestrator | 19:09:52.726 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:09:52.726145 | orchestrator | 19:09:52.726 STDOUT terraform:  + block_device { 2025-06-22 19:09:52.726219 | orchestrator | 19:09:52.726 STDOUT terraform:  + boot_index = 0 2025-06-22 19:09:52.726227 | orchestrator | 19:09:52.726 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:09:52.726233 | orchestrator | 19:09:52.726 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:09:52.726256 | orchestrator | 19:09:52.726 STDOUT terraform:  + multiattach = false 2025-06-22 19:09:52.726287 | orchestrator | 19:09:52.726 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:09:52.726328 | orchestrator | 19:09:52.726 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.726336 | orchestrator | 19:09:52.726 STDOUT terraform:  } 2025-06-22 19:09:52.726354 | orchestrator | 19:09:52.726 STDOUT terraform:  + network { 2025-06-22 19:09:52.726375 | orchestrator | 19:09:52.726 STDOUT terraform:  + access_network = false 2025-06-22 19:09:52.726410 | orchestrator | 19:09:52.726 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:09:52.726440 | orchestrator | 19:09:52.726 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:09:52.726470 | orchestrator | 19:09:52.726 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:09:52.726502 | orchestrator | 19:09:52.726 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.726533 | orchestrator | 19:09:52.726 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:09:52.726566 | orchestrator | 19:09:52.726 STDOUT terraform:  + uuid = (known after apply) 
2025-06-22 19:09:52.726573 | orchestrator | 19:09:52.726 STDOUT terraform:  } 2025-06-22 19:09:52.726581 | orchestrator | 19:09:52.726 STDOUT terraform:  } 2025-06-22 19:09:52.726853 | orchestrator | 19:09:52.726 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-22 19:09:52.726984 | orchestrator | 19:09:52.726 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:09:52.727066 | orchestrator | 19:09:52.727 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.727124 | orchestrator | 19:09:52.727 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.727204 | orchestrator | 19:09:52.727 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.727242 | orchestrator | 19:09:52.727 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.727325 | orchestrator | 19:09:52.727 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.727371 | orchestrator | 19:09:52.727 STDOUT terraform:  + config_drive = true 2025-06-22 19:09:52.727446 | orchestrator | 19:09:52.727 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:09:52.727521 | orchestrator | 19:09:52.727 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:09:52.727577 | orchestrator | 19:09:52.727 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:09:52.727626 | orchestrator | 19:09:52.727 STDOUT terraform:  + force_delete = false 2025-06-22 19:09:52.727705 | orchestrator | 19:09:52.727 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:09:52.727801 | orchestrator | 19:09:52.727 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.727835 | orchestrator | 19:09:52.727 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.727893 | orchestrator | 19:09:52.727 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:09:52.727921 | orchestrator | 19:09:52.727 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:09:52.727992 | orchestrator | 19:09:52.727 STDOUT terraform:  + name = "testbed-node-1" 2025-06-22 19:09:52.728055 | orchestrator | 19:09:52.727 STDOUT terraform:  + power_state = "active" 2025-06-22 19:09:52.728144 | orchestrator | 19:09:52.728 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.728212 | orchestrator | 19:09:52.728 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:09:52.728241 | orchestrator | 19:09:52.728 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:09:52.728274 | orchestrator | 19:09:52.728 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:09:52.728326 | orchestrator | 19:09:52.728 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:09:52.728334 | orchestrator | 19:09:52.728 STDOUT terraform:  + block_device { 2025-06-22 19:09:52.728364 | orchestrator | 19:09:52.728 STDOUT terraform:  + boot_index = 0 2025-06-22 19:09:52.728396 | orchestrator | 19:09:52.728 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:09:52.728423 | orchestrator | 19:09:52.728 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:09:52.728450 | orchestrator | 19:09:52.728 STDOUT terraform:  + multiattach = false 2025-06-22 19:09:52.728480 | orchestrator | 19:09:52.728 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:09:52.728521 | orchestrator | 19:09:52.728 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.728528 | 
orchestrator | 19:09:52.728 STDOUT terraform:  } 2025-06-22 19:09:52.728548 | orchestrator | 19:09:52.728 STDOUT terraform:  + network { 2025-06-22 19:09:52.728567 | orchestrator | 19:09:52.728 STDOUT terraform:  + access_network = false 2025-06-22 19:09:52.728601 | orchestrator | 19:09:52.728 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:09:52.728632 | orchestrator | 19:09:52.728 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:09:52.728663 | orchestrator | 19:09:52.728 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:09:52.728696 | orchestrator | 19:09:52.728 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.728727 | orchestrator | 19:09:52.728 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:09:52.728758 | orchestrator | 19:09:52.728 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.728765 | orchestrator | 19:09:52.728 STDOUT terraform:  } 2025-06-22 19:09:52.728782 | orchestrator | 19:09:52.728 STDOUT terraform:  } 2025-06-22 19:09:52.728825 | orchestrator | 19:09:52.728 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-22 19:09:52.728868 | orchestrator | 19:09:52.728 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:09:52.728905 | orchestrator | 19:09:52.728 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.729098 | orchestrator | 19:09:52.728 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.729140 | orchestrator | 19:09:52.729 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.729212 | orchestrator | 19:09:52.729 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.729220 | orchestrator | 19:09:52.729 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.729225 | orchestrator | 19:09:52.729 STDOUT terraform:  + config_drive = true 2025-06-22 19:09:52.729261 | orchestrator | 19:09:52.729 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:09:52.729296 | orchestrator | 19:09:52.729 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:09:52.729326 | orchestrator | 19:09:52.729 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:09:52.729352 | orchestrator | 19:09:52.729 STDOUT terraform:  + force_delete = false 2025-06-22 19:09:52.729387 | orchestrator | 19:09:52.729 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:09:52.729516 | orchestrator | 19:09:52.729 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.729587 | orchestrator | 19:09:52.729 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.729624 | orchestrator | 19:09:52.729 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:09:52.729653 | orchestrator | 19:09:52.729 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:09:52.729685 | orchestrator | 19:09:52.729 STDOUT terraform:  + name = "testbed-node-2" 2025-06-22 19:09:52.729710 | orchestrator | 19:09:52.729 STDOUT terraform:  + power_state = "active" 2025-06-22 19:09:52.729745 | orchestrator | 19:09:52.729 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.729782 | orchestrator | 19:09:52.729 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:09:52.729804 | orchestrator | 19:09:52.729 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:09:52.729842 | orchestrator | 19:09:52.729 STDOUT terraform:  + updated = (known 
after apply) 2025-06-22 19:09:52.729898 | orchestrator | 19:09:52.729 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:09:52.729921 | orchestrator | 19:09:52.729 STDOUT terraform:  + block_device { 2025-06-22 19:09:52.729949 | orchestrator | 19:09:52.729 STDOUT terraform:  + boot_index = 0 2025-06-22 19:09:52.729971 | orchestrator | 19:09:52.729 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:09:52.730005 | orchestrator | 19:09:52.729 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:09:52.730045 | orchestrator | 19:09:52.729 STDOUT terraform:  + multiattach = false 2025-06-22 19:09:52.730077 | orchestrator | 19:09:52.730 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:09:52.730114 | orchestrator | 19:09:52.730 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.730129 | orchestrator | 19:09:52.730 STDOUT terraform:  } 2025-06-22 19:09:52.730135 | orchestrator | 19:09:52.730 STDOUT terraform:  + network { 2025-06-22 19:09:52.730161 | orchestrator | 19:09:52.730 STDOUT terraform:  + access_network = false 2025-06-22 19:09:52.730214 | orchestrator | 19:09:52.730 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:09:52.730246 | orchestrator | 19:09:52.730 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:09:52.730281 | orchestrator | 19:09:52.730 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:09:52.730313 | orchestrator | 19:09:52.730 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.730343 | orchestrator | 19:09:52.730 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:09:52.730375 | orchestrator | 19:09:52.730 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.730382 | orchestrator | 19:09:52.730 STDOUT terraform:  } 2025-06-22 19:09:52.730403 | orchestrator | 19:09:52.730 STDOUT terraform:  } 2025-06-22 19:09:52.730453 | orchestrator | 19:09:52.730 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created 2025-06-22 19:09:52.730495 | orchestrator | 19:09:52.730 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:09:52.730532 | orchestrator | 19:09:52.730 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.730568 | orchestrator | 19:09:52.730 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.730603 | orchestrator | 19:09:52.730 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.730644 | orchestrator | 19:09:52.730 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.730666 | orchestrator | 19:09:52.730 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.730688 | orchestrator | 19:09:52.730 STDOUT terraform:  + config_drive = true 2025-06-22 19:09:52.730726 | orchestrator | 19:09:52.730 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:09:52.730760 | orchestrator | 19:09:52.730 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:09:52.730790 | orchestrator | 19:09:52.730 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:09:52.730813 | orchestrator | 19:09:52.730 STDOUT terraform:  + force_delete = false 2025-06-22 19:09:52.730847 | orchestrator | 19:09:52.730 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:09:52.730890 | orchestrator | 19:09:52.730 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.730923 | orchestrator | 19:09:52.730 STDOUT 
terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.730956 | orchestrator | 19:09:52.730 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:09:52.730982 | orchestrator | 19:09:52.730 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:09:52.731012 | orchestrator | 19:09:52.730 STDOUT terraform:  + name = "testbed-node-3" 2025-06-22 19:09:52.731038 | orchestrator | 19:09:52.731 STDOUT terraform:  + power_state = "active" 2025-06-22 19:09:52.731072 | orchestrator | 19:09:52.731 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.731109 | orchestrator | 19:09:52.731 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:09:52.731127 | orchestrator | 19:09:52.731 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:09:52.731179 | orchestrator | 19:09:52.731 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:09:52.731224 | orchestrator | 19:09:52.731 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:09:52.731232 | orchestrator | 19:09:52.731 STDOUT terraform:  + block_device { 2025-06-22 19:09:52.731261 | orchestrator | 19:09:52.731 STDOUT terraform:  + boot_index = 0 2025-06-22 19:09:52.731289 | orchestrator | 19:09:52.731 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:09:52.731320 | orchestrator | 19:09:52.731 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:09:52.731354 | orchestrator | 19:09:52.731 STDOUT terraform:  + multiattach = false 2025-06-22 19:09:52.731388 | orchestrator | 19:09:52.731 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:09:52.731429 | orchestrator | 19:09:52.731 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.731436 | orchestrator | 19:09:52.731 STDOUT terraform:  } 2025-06-22 19:09:52.731442 | orchestrator | 19:09:52.731 STDOUT terraform:  + network { 2025-06-22 19:09:52.731467 | orchestrator | 19:09:52.731 STDOUT terraform:  + access_network = false 2025-06-22 19:09:52.731497 | orchestrator | 19:09:52.731 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:09:52.731529 | orchestrator | 19:09:52.731 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:09:52.731564 | orchestrator | 19:09:52.731 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:09:52.731595 | orchestrator | 19:09:52.731 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.731626 | orchestrator | 19:09:52.731 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:09:52.731658 | orchestrator | 19:09:52.731 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.731665 | orchestrator | 19:09:52.731 STDOUT terraform:  } 2025-06-22 19:09:52.731681 | orchestrator | 19:09:52.731 STDOUT terraform:  } 2025-06-22 19:09:52.731724 | orchestrator | 19:09:52.731 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-06-22 19:09:52.731766 | orchestrator | 19:09:52.731 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:09:52.731804 | orchestrator | 19:09:52.731 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.731838 | orchestrator | 19:09:52.731 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.731873 | orchestrator | 19:09:52.731 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.731910 | orchestrator | 19:09:52.731 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.731941 | 
orchestrator | 19:09:52.731 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.731955 | orchestrator | 19:09:52.731 STDOUT terraform:  + config_drive = true 2025-06-22 19:09:52.731995 | orchestrator | 19:09:52.731 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:09:52.732025 | orchestrator | 19:09:52.731 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:09:52.732054 | orchestrator | 19:09:52.732 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:09:52.732078 | orchestrator | 19:09:52.732 STDOUT terraform:  + force_delete = false 2025-06-22 19:09:52.732115 | orchestrator | 19:09:52.732 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:09:52.732157 | orchestrator | 19:09:52.732 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.732207 | orchestrator | 19:09:52.732 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.732244 | orchestrator | 19:09:52.732 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:09:52.732267 | orchestrator | 19:09:52.732 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:09:52.732297 | orchestrator | 19:09:52.732 STDOUT terraform:  + name = "testbed-node-4" 2025-06-22 19:09:52.732322 | orchestrator | 19:09:52.732 STDOUT terraform:  + power_state = "active" 2025-06-22 19:09:52.732360 | orchestrator | 19:09:52.732 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.732395 | orchestrator | 19:09:52.732 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:09:52.732422 | orchestrator | 19:09:52.732 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:09:52.732454 | orchestrator | 19:09:52.732 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:09:52.732502 | orchestrator | 19:09:52.732 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:09:52.732520 | orchestrator | 19:09:52.732 STDOUT terraform:  + block_device { 2025-06-22 19:09:52.732546 | orchestrator | 19:09:52.732 STDOUT terraform:  + boot_index = 0 2025-06-22 19:09:52.732573 | orchestrator | 19:09:52.732 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:09:52.732601 | orchestrator | 19:09:52.732 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:09:52.732629 | orchestrator | 19:09:52.732 STDOUT terraform:  + multiattach = false 2025-06-22 19:09:52.732663 | orchestrator | 19:09:52.732 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:09:52.732703 | orchestrator | 19:09:52.732 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.732711 | orchestrator | 19:09:52.732 STDOUT terraform:  } 2025-06-22 19:09:52.732732 | orchestrator | 19:09:52.732 STDOUT terraform:  + network { 2025-06-22 19:09:52.732752 | orchestrator | 19:09:52.732 STDOUT terraform:  + access_network = false 2025-06-22 19:09:52.732783 | orchestrator | 19:09:52.732 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:09:52.732814 | orchestrator | 19:09:52.732 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:09:52.732849 | orchestrator | 19:09:52.732 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:09:52.732881 | orchestrator | 19:09:52.732 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.732911 | orchestrator | 19:09:52.732 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:09:52.732941 | orchestrator | 19:09:52.732 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.732948 | 
orchestrator | 19:09:52.732 STDOUT terraform:  } 2025-06-22 19:09:52.732965 | orchestrator | 19:09:52.732 STDOUT terraform:  } 2025-06-22 19:09:52.733129 | orchestrator | 19:09:52.733 STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-06-22 19:09:52.733207 | orchestrator | 19:09:52.733 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:09:52.733227 | orchestrator | 19:09:52.733 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.733262 | orchestrator | 19:09:52.733 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.733301 | orchestrator | 19:09:52.733 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.733341 | orchestrator | 19:09:52.733 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.733364 | orchestrator | 19:09:52.733 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.733386 | orchestrator | 19:09:52.733 STDOUT terraform:  + config_drive = true 2025-06-22 19:09:52.733422 | orchestrator | 19:09:52.733 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:09:52.733455 | orchestrator | 19:09:52.733 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:09:52.733489 | orchestrator | 19:09:52.733 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-22 19:09:52.733511 | orchestrator | 19:09:52.733 STDOUT terraform:  + force_delete = false 2025-06-22 19:09:52.733544 | orchestrator | 19:09:52.733 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:09:52.733585 | orchestrator | 19:09:52.733 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.733619 | orchestrator | 19:09:52.733 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.733654 | orchestrator | 19:09:52.733 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:09:52.733680 | orchestrator | 19:09:52.733 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:09:52.733709 | orchestrator | 19:09:52.733 STDOUT terraform:  + name = "testbed-node-5" 2025-06-22 19:09:52.733734 | orchestrator | 19:09:52.733 STDOUT terraform:  + power_state = "active" 2025-06-22 19:09:52.733774 | orchestrator | 19:09:52.733 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.733807 | orchestrator | 19:09:52.733 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:09:52.733835 | orchestrator | 19:09:52.733 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:09:52.733869 | orchestrator | 19:09:52.733 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:09:52.733918 | orchestrator | 19:09:52.733 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-22 19:09:52.733938 | orchestrator | 19:09:52.733 STDOUT terraform:  + block_device { 2025-06-22 19:09:52.733963 | orchestrator | 19:09:52.733 STDOUT terraform:  + boot_index = 0 2025-06-22 19:09:52.733992 | orchestrator | 19:09:52.733 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:09:52.734038 | orchestrator | 19:09:52.733 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:09:52.735056 | orchestrator | 19:09:52.735 STDOUT terraform:  + multiattach = false 2025-06-22 19:09:52.735077 | orchestrator | 19:09:52.735 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:09:52.735115 | orchestrator | 19:09:52.735 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.735123 | orchestrator | 19:09:52.735 
STDOUT terraform:  } 2025-06-22 19:09:52.735130 | orchestrator | 19:09:52.735 STDOUT terraform:  + network { 2025-06-22 19:09:52.735156 | orchestrator | 19:09:52.735 STDOUT terraform:  + access_network = false 2025-06-22 19:09:52.735209 | orchestrator | 19:09:52.735 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:09:52.735231 | orchestrator | 19:09:52.735 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:09:52.735265 | orchestrator | 19:09:52.735 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:09:52.735301 | orchestrator | 19:09:52.735 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.735338 | orchestrator | 19:09:52.735 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:09:52.735365 | orchestrator | 19:09:52.735 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.735372 | orchestrator | 19:09:52.735 STDOUT terraform:  } 2025-06-22 19:09:52.735378 | orchestrator | 19:09:52.735 STDOUT terraform:  } 2025-06-22 19:09:52.735418 | orchestrator | 19:09:52.735 STDOUT terraform:  # openstack_compute_keypair_v2.key will be created 2025-06-22 19:09:52.735456 | orchestrator | 19:09:52.735 STDOUT terraform:  + resource "openstack_compute_keypair_v2" "key" { 2025-06-22 19:09:52.735488 | orchestrator | 19:09:52.735 STDOUT terraform:  + fingerprint = (known after apply) 2025-06-22 19:09:52.735515 | orchestrator | 19:09:52.735 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.735541 | orchestrator | 19:09:52.735 STDOUT terraform:  + name = "testbed" 2025-06-22 19:09:52.735568 | orchestrator | 19:09:52.735 STDOUT terraform:  + private_key = (sensitive value) 2025-06-22 19:09:52.735594 | orchestrator | 19:09:52.735 STDOUT terraform:  + public_key = (known after apply) 2025-06-22 19:09:52.735631 | orchestrator | 19:09:52.735 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.735662 | orchestrator | 19:09:52.735 STDOUT terraform:  + user_id = (known after apply) 2025-06-22 19:09:52.735669 | orchestrator | 19:09:52.735 STDOUT terraform:  } 2025-06-22 19:09:52.735717 | orchestrator | 19:09:52.735 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2025-06-22 19:09:52.735771 | orchestrator | 19:09:52.735 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.735798 | orchestrator | 19:09:52.735 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:09:52.735827 | orchestrator | 19:09:52.735 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.735863 | orchestrator | 19:09:52.735 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.735893 | orchestrator | 19:09:52.735 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.735921 | orchestrator | 19:09:52.735 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.735928 | orchestrator | 19:09:52.735 STDOUT terraform:  } 2025-06-22 19:09:52.735979 | orchestrator | 19:09:52.735 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2025-06-22 19:09:52.736026 | orchestrator | 19:09:52.735 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.736053 | orchestrator | 19:09:52.736 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:09:52.736082 | orchestrator | 19:09:52.736 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.736114 | 
orchestrator | 19:09:52.736 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.736143 | orchestrator | 19:09:52.736 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.736182 | orchestrator | 19:09:52.736 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.736207 | orchestrator | 19:09:52.736 STDOUT terraform:  } 2025-06-22 19:09:52.736254 | orchestrator | 19:09:52.736 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2025-06-22 19:09:52.736305 | orchestrator | 19:09:52.736 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.736333 | orchestrator | 19:09:52.736 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:09:52.736365 | orchestrator | 19:09:52.736 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.736394 | orchestrator | 19:09:52.736 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.736424 | orchestrator | 19:09:52.736 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.736456 | orchestrator | 19:09:52.736 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.736464 | orchestrator | 19:09:52.736 STDOUT terraform:  } 2025-06-22 19:09:52.736515 | orchestrator | 19:09:52.736 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2025-06-22 19:09:52.736560 | orchestrator | 19:09:52.736 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.736593 | orchestrator | 19:09:52.736 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:09:52.736621 | orchestrator | 19:09:52.736 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.736649 | orchestrator | 19:09:52.736 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.736987 | orchestrator | 19:09:52.736 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.737023 | orchestrator | 19:09:52.736 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.737031 | orchestrator | 19:09:52.737 STDOUT terraform:  } 2025-06-22 19:09:52.737109 | orchestrator | 19:09:52.737 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2025-06-22 19:09:52.737159 | orchestrator | 19:09:52.737 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.737208 | orchestrator | 19:09:52.737 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:09:52.737230 | orchestrator | 19:09:52.737 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.737260 | orchestrator | 19:09:52.737 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.737290 | orchestrator | 19:09:52.737 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.737323 | orchestrator | 19:09:52.737 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.737330 | orchestrator | 19:09:52.737 STDOUT terraform:  } 2025-06-22 19:09:52.737382 | orchestrator | 19:09:52.737 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created 2025-06-22 19:09:52.737435 | orchestrator | 19:09:52.737 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.737463 | orchestrator | 19:09:52.737 STDOUT terraform:  + device = (known after 
apply) 2025-06-22 19:09:52.737494 | orchestrator | 19:09:52.737 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.737525 | orchestrator | 19:09:52.737 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.737555 | orchestrator | 19:09:52.737 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.737585 | orchestrator | 19:09:52.737 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.737593 | orchestrator | 19:09:52.737 STDOUT terraform:  } 2025-06-22 19:09:52.737646 | orchestrator | 19:09:52.737 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created 2025-06-22 19:09:52.737694 | orchestrator | 19:09:52.737 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.737723 | orchestrator | 19:09:52.737 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:09:52.737751 | orchestrator | 19:09:52.737 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.737783 | orchestrator | 19:09:52.737 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.737815 | orchestrator | 19:09:52.737 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.737848 | orchestrator | 19:09:52.737 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.737855 | orchestrator | 19:09:52.737 STDOUT terraform:  } 2025-06-22 19:09:52.737906 | orchestrator | 19:09:52.737 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created 2025-06-22 19:09:52.737953 | orchestrator | 19:09:52.737 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.737982 | orchestrator | 19:09:52.737 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:09:52.738030 | orchestrator | 19:09:52.737 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.738059 | orchestrator | 19:09:52.738 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.738089 | orchestrator | 19:09:52.738 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.738117 | orchestrator | 19:09:52.738 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.738124 | orchestrator | 19:09:52.738 STDOUT terraform:  } 2025-06-22 19:09:52.738200 | orchestrator | 19:09:52.738 STDOUT terraform:  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created 2025-06-22 19:09:52.738248 | orchestrator | 19:09:52.738 STDOUT terraform:  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2025-06-22 19:09:52.738284 | orchestrator | 19:09:52.738 STDOUT terraform:  + device = (known after apply) 2025-06-22 19:09:52.738308 | orchestrator | 19:09:52.738 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.738337 | orchestrator | 19:09:52.738 STDOUT terraform:  + instance_id = (known after apply) 2025-06-22 19:09:52.738367 | orchestrator | 19:09:52.738 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.738395 | orchestrator | 19:09:52.738 STDOUT terraform:  + volume_id = (known after apply) 2025-06-22 19:09:52.738404 | orchestrator | 19:09:52.738 STDOUT terraform:  } 2025-06-22 19:09:52.738467 | orchestrator | 19:09:52.738 STDOUT terraform:  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created 2025-06-22 19:09:52.738518 | orchestrator | 19:09:52.738 STDOUT terraform:  + resource 
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-22 19:09:52.738547 | orchestrator | 19:09:52.738 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-22 19:09:52.738576 | orchestrator | 19:09:52.738 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-22 19:09:52.738605 | orchestrator | 19:09:52.738 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.738633 | orchestrator | 19:09:52.738 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 19:09:52.738664 | orchestrator | 19:09:52.738 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.738672 | orchestrator | 19:09:52.738 STDOUT terraform:  } 2025-06-22 19:09:52.738721 | orchestrator | 19:09:52.738 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-06-22 19:09:52.738767 | orchestrator | 19:09:52.738 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-06-22 19:09:52.738794 | orchestrator | 19:09:52.738 STDOUT terraform:  + address = (known after apply) 2025-06-22 19:09:52.738821 | orchestrator | 19:09:52.738 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.738846 | orchestrator | 19:09:52.738 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-22 19:09:52.738872 | orchestrator | 19:09:52.738 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.738902 | orchestrator | 19:09:52.738 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-22 19:09:52.738925 | orchestrator | 19:09:52.738 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.738951 | orchestrator | 19:09:52.738 STDOUT terraform:  + pool = "public" 2025-06-22 19:09:52.738977 | orchestrator | 19:09:52.738 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 19:09:52.739004 | orchestrator | 19:09:52.738 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.739028 | orchestrator | 19:09:52.738 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.739054 | orchestrator | 19:09:52.739 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.739061 | orchestrator | 19:09:52.739 STDOUT terraform:  } 2025-06-22 19:09:52.739106 | orchestrator | 19:09:52.739 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-06-22 19:09:52.746092 | orchestrator | 19:09:52.739 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-06-22 19:09:52.746125 | orchestrator | 19:09:52.739 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.746132 | orchestrator | 19:09:52.739 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.746137 | orchestrator | 19:09:52.739 STDOUT terraform:  + availability_zone_hints = [ 2025-06-22 19:09:52.746141 | orchestrator | 19:09:52.739 STDOUT terraform:  + "nova", 2025-06-22 19:09:52.746145 | orchestrator | 19:09:52.739 STDOUT terraform:  ] 2025-06-22 19:09:52.746149 | orchestrator | 19:09:52.739 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-22 19:09:52.746153 | orchestrator | 19:09:52.739 STDOUT terraform:  + external = (known after apply) 2025-06-22 19:09:52.746157 | orchestrator | 19:09:52.739 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.746161 | orchestrator | 19:09:52.739 STDOUT terraform:  + mtu = (known after apply) 2025-06-22 19:09:52.746164 | orchestrator | 19:09:52.739 STDOUT terraform:  + name = 
"net-testbed-management" 2025-06-22 19:09:52.746184 | orchestrator | 19:09:52.739 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.746188 | orchestrator | 19:09:52.739 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.746192 | orchestrator | 19:09:52.739 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.746204 | orchestrator | 19:09:52.739 STDOUT terraform:  + shared = (known after apply) 2025-06-22 19:09:52.746208 | orchestrator | 19:09:52.739 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.746212 | orchestrator | 19:09:52.739 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-06-22 19:09:52.746216 | orchestrator | 19:09:52.739 STDOUT terraform:  + segments (known after apply) 2025-06-22 19:09:52.746219 | orchestrator | 19:09:52.739 STDOUT terraform:  } 2025-06-22 19:09:52.746224 | orchestrator | 19:09:52.739 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-06-22 19:09:52.746228 | orchestrator | 19:09:52.739 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-06-22 19:09:52.746232 | orchestrator | 19:09:52.739 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.746236 | orchestrator | 19:09:52.739 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:09:52.746240 | orchestrator | 19:09:52.739 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:09:52.746244 | orchestrator | 19:09:52.740 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.746247 | orchestrator | 19:09:52.740 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:09:52.746258 | orchestrator | 19:09:52.740 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:09:52.746262 | orchestrator | 19:09:52.740 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:09:52.746266 | orchestrator | 19:09:52.740 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.746270 | orchestrator | 19:09:52.740 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.746274 | orchestrator | 19:09:52.740 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:09:52.746278 | orchestrator | 19:09:52.740 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.746281 | orchestrator | 19:09:52.740 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.746285 | orchestrator | 19:09:52.740 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.746289 | orchestrator | 19:09:52.740 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.746302 | orchestrator | 19:09:52.740 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:09:52.746307 | orchestrator | 19:09:52.740 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.746311 | orchestrator | 19:09:52.740 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746314 | orchestrator | 19:09:52.740 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:09:52.746318 | orchestrator | 19:09:52.740 STDOUT terraform:  } 2025-06-22 19:09:52.746322 | orchestrator | 19:09:52.740 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746326 | orchestrator | 19:09:52.740 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:09:52.746330 | orchestrator | 19:09:52.740 STDOUT 
terraform:  } 2025-06-22 19:09:52.746334 | orchestrator | 19:09:52.740 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:09:52.746338 | orchestrator | 19:09:52.740 STDOUT terraform:  + fixed_ip { 2025-06-22 19:09:52.746342 | orchestrator | 19:09:52.740 STDOUT terraform:  + ip_address = "192.168.16.5" 2025-06-22 19:09:52.746346 | orchestrator | 19:09:52.740 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.746350 | orchestrator | 19:09:52.740 STDOUT terraform:  } 2025-06-22 19:09:52.746354 | orchestrator | 19:09:52.740 STDOUT terraform:  } 2025-06-22 19:09:52.746358 | orchestrator | 19:09:52.740 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[0] will be created 2025-06-22 19:09:52.746362 | orchestrator | 19:09:52.740 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:09:52.746366 | orchestrator | 19:09:52.740 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.746370 | orchestrator | 19:09:52.740 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:09:52.746374 | orchestrator | 19:09:52.740 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:09:52.746378 | orchestrator | 19:09:52.740 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.746382 | orchestrator | 19:09:52.740 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:09:52.746390 | orchestrator | 19:09:52.740 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:09:52.746394 | orchestrator | 19:09:52.740 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:09:52.746398 | orchestrator | 19:09:52.740 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.746402 | orchestrator | 19:09:52.740 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.746409 | orchestrator | 19:09:52.741 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:09:52.746413 | orchestrator | 19:09:52.741 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.746417 | orchestrator | 19:09:52.741 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.746421 | orchestrator | 19:09:52.741 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.746424 | orchestrator | 19:09:52.741 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.746428 | orchestrator | 19:09:52.741 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:09:52.746432 | orchestrator | 19:09:52.741 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.746436 | orchestrator | 19:09:52.741 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746440 | orchestrator | 19:09:52.741 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:09:52.746444 | orchestrator | 19:09:52.741 STDOUT terraform:  } 2025-06-22 19:09:52.746448 | orchestrator | 19:09:52.741 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746452 | orchestrator | 19:09:52.741 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:09:52.746456 | orchestrator | 19:09:52.741 STDOUT terraform:  } 2025-06-22 19:09:52.746460 | orchestrator | 19:09:52.741 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746471 | orchestrator | 19:09:52.741 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:09:52.746476 | orchestrator | 19:09:52.741 STDOUT terraform:  } 
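The management network and port entries in this plan map to resources of roughly the shape below. Again a sketch: the subnet resource name (subnet_management) does not appear in this excerpt and is an assumption, while the network name, fixed IPs, and allowed-address-pair prefixes are the literal values printed in the plan.

resource "openstack_networking_network_v2" "net_management" {
  name                    = "net-testbed-management"
  availability_zone_hints = ["nova"]
}

# One management port per node; fixed IPs 192.168.16.10 through 192.168.16.15 as shown in the plan.
resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed subnet resource
    ip_address = "192.168.16.${10 + count.index}"
  }

  # Allowed address pairs for the virtual addresses listed in the plan.
  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}

# The manager port (manager_port_management, shown earlier in the plan) uses the fixed
# address 192.168.16.5 and a shorter allowed-address-pair list (192.168.112.0/20 and 192.168.16.8/20).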
2025-06-22 19:09:52.746480 | orchestrator | 19:09:52.741 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746483 | orchestrator | 19:09:52.741 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:09:52.746487 | orchestrator | 19:09:52.741 STDOUT terraform:  } 2025-06-22 19:09:52.746491 | orchestrator | 19:09:52.741 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:09:52.746495 | orchestrator | 19:09:52.741 STDOUT terraform:  + fixed_ip { 2025-06-22 19:09:52.746499 | orchestrator | 19:09:52.741 STDOUT terraform:  + ip_address = "192.168.16.10" 2025-06-22 19:09:52.746503 | orchestrator | 19:09:52.741 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.746507 | orchestrator | 19:09:52.741 STDOUT terraform:  } 2025-06-22 19:09:52.746511 | orchestrator | 19:09:52.741 STDOUT terraform:  } 2025-06-22 19:09:52.746515 | orchestrator | 19:09:52.741 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[1] will be created 2025-06-22 19:09:52.746519 | orchestrator | 19:09:52.741 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:09:52.746527 | orchestrator | 19:09:52.741 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.746531 | orchestrator | 19:09:52.741 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:09:52.746535 | orchestrator | 19:09:52.741 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:09:52.746541 | orchestrator | 19:09:52.741 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.746545 | orchestrator | 19:09:52.741 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:09:52.746549 | orchestrator | 19:09:52.741 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:09:52.746553 | orchestrator | 19:09:52.741 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:09:52.746557 | orchestrator | 19:09:52.741 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.746561 | orchestrator | 19:09:52.741 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.746564 | orchestrator | 19:09:52.741 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:09:52.746568 | orchestrator | 19:09:52.741 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.746572 | orchestrator | 19:09:52.741 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.746576 | orchestrator | 19:09:52.741 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.746580 | orchestrator | 19:09:52.742 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.746583 | orchestrator | 19:09:52.742 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:09:52.746587 | orchestrator | 19:09:52.742 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.746591 | orchestrator | 19:09:52.742 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746595 | orchestrator | 19:09:52.742 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:09:52.746599 | orchestrator | 19:09:52.742 STDOUT terraform:  } 2025-06-22 19:09:52.746603 | orchestrator | 19:09:52.742 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746607 | orchestrator | 19:09:52.742 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:09:52.746611 | orchestrator | 19:09:52.742 STDOUT terraform:  } 2025-06-22 
19:09:52.746614 | orchestrator | 19:09:52.742 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746618 | orchestrator | 19:09:52.742 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:09:52.746622 | orchestrator | 19:09:52.742 STDOUT terraform:  } 2025-06-22 19:09:52.746626 | orchestrator | 19:09:52.742 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746634 | orchestrator | 19:09:52.742 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:09:52.746638 | orchestrator | 19:09:52.742 STDOUT terraform:  } 2025-06-22 19:09:52.746642 | orchestrator | 19:09:52.742 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:09:52.746646 | orchestrator | 19:09:52.742 STDOUT terraform:  + fixed_ip { 2025-06-22 19:09:52.746653 | orchestrator | 19:09:52.742 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-22 19:09:52.746657 | orchestrator | 19:09:52.742 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.746661 | orchestrator | 19:09:52.742 STDOUT terraform:  } 2025-06-22 19:09:52.746665 | orchestrator | 19:09:52.742 STDOUT terraform:  } 2025-06-22 19:09:52.746669 | orchestrator | 19:09:52.742 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-22 19:09:52.746673 | orchestrator | 19:09:52.742 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:09:52.746677 | orchestrator | 19:09:52.742 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.746681 | orchestrator | 19:09:52.742 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:09:52.746685 | orchestrator | 19:09:52.742 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:09:52.746688 | orchestrator | 19:09:52.742 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.746692 | orchestrator | 19:09:52.742 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:09:52.746696 | orchestrator | 19:09:52.742 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:09:52.746700 | orchestrator | 19:09:52.742 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:09:52.746704 | orchestrator | 19:09:52.742 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.746708 | orchestrator | 19:09:52.742 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.746712 | orchestrator | 19:09:52.743 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:09:52.746716 | orchestrator | 19:09:52.743 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.746720 | orchestrator | 19:09:52.743 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.746723 | orchestrator | 19:09:52.743 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.746727 | orchestrator | 19:09:52.743 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.746731 | orchestrator | 19:09:52.743 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:09:52.746735 | orchestrator | 19:09:52.743 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.746739 | orchestrator | 19:09:52.743 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746743 | orchestrator | 19:09:52.743 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:09:52.746747 | orchestrator | 19:09:52.743 STDOUT terraform:  } 2025-06-22 19:09:52.746751 | 
orchestrator | 19:09:52.743 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746755 | orchestrator | 19:09:52.743 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:09:52.746758 | orchestrator | 19:09:52.743 STDOUT terraform:  } 2025-06-22 19:09:52.746762 | orchestrator | 19:09:52.743 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746766 | orchestrator | 19:09:52.743 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:09:52.746773 | orchestrator | 19:09:52.743 STDOUT terraform:  } 2025-06-22 19:09:52.746777 | orchestrator | 19:09:52.743 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746781 | orchestrator | 19:09:52.743 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:09:52.746785 | orchestrator | 19:09:52.743 STDOUT terraform:  } 2025-06-22 19:09:52.746789 | orchestrator | 19:09:52.743 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:09:52.746793 | orchestrator | 19:09:52.743 STDOUT terraform:  + fixed_ip { 2025-06-22 19:09:52.746801 | orchestrator | 19:09:52.743 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-22 19:09:52.746805 | orchestrator | 19:09:52.743 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.746809 | orchestrator | 19:09:52.743 STDOUT terraform:  } 2025-06-22 19:09:52.746813 | orchestrator | 19:09:52.743 STDOUT terraform:  } 2025-06-22 19:09:52.746816 | orchestrator | 19:09:52.743 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-22 19:09:52.746820 | orchestrator | 19:09:52.743 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:09:52.746824 | orchestrator | 19:09:52.743 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.746828 | orchestrator | 19:09:52.743 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:09:52.746832 | orchestrator | 19:09:52.743 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:09:52.746836 | orchestrator | 19:09:52.743 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.746840 | orchestrator | 19:09:52.743 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:09:52.746844 | orchestrator | 19:09:52.743 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:09:52.746850 | orchestrator | 19:09:52.743 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:09:52.746854 | orchestrator | 19:09:52.743 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.746860 | orchestrator | 19:09:52.743 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.746864 | orchestrator | 19:09:52.743 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:09:52.746868 | orchestrator | 19:09:52.743 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.746872 | orchestrator | 19:09:52.743 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.746876 | orchestrator | 19:09:52.744 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.746880 | orchestrator | 19:09:52.744 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.746884 | orchestrator | 19:09:52.744 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:09:52.746887 | orchestrator | 19:09:52.744 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.746891 | orchestrator | 
19:09:52.744 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746895 | orchestrator | 19:09:52.744 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:09:52.746904 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.746908 | orchestrator | 19:09:52.744 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746911 | orchestrator | 19:09:52.744 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:09:52.746915 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.746919 | orchestrator | 19:09:52.744 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746923 | orchestrator | 19:09:52.744 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:09:52.746927 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.746931 | orchestrator | 19:09:52.744 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.746934 | orchestrator | 19:09:52.744 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:09:52.746938 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.746942 | orchestrator | 19:09:52.744 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:09:52.746946 | orchestrator | 19:09:52.744 STDOUT terraform:  + fixed_ip { 2025-06-22 19:09:52.746950 | orchestrator | 19:09:52.744 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-22 19:09:52.746954 | orchestrator | 19:09:52.744 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.746958 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.746965 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.746970 | orchestrator | 19:09:52.744 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-22 19:09:52.746974 | orchestrator | 19:09:52.744 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:09:52.746978 | orchestrator | 19:09:52.744 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.746982 | orchestrator | 19:09:52.744 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:09:52.746985 | orchestrator | 19:09:52.744 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:09:52.746989 | orchestrator | 19:09:52.744 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.746993 | orchestrator | 19:09:52.744 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:09:52.746997 | orchestrator | 19:09:52.744 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:09:52.747001 | orchestrator | 19:09:52.744 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:09:52.747005 | orchestrator | 19:09:52.744 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.747009 | orchestrator | 19:09:52.744 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.747013 | orchestrator | 19:09:52.744 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:09:52.747016 | orchestrator | 19:09:52.744 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.747022 | orchestrator | 19:09:52.744 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.747031 | orchestrator | 19:09:52.744 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.747035 | orchestrator | 19:09:52.744 STDOUT terraform:  + region = (known after apply) 
2025-06-22 19:09:52.747039 | orchestrator | 19:09:52.744 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:09:52.747043 | orchestrator | 19:09:52.745 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.747047 | orchestrator | 19:09:52.745 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.747051 | orchestrator | 19:09:52.745 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:09:52.747055 | orchestrator | 19:09:52.745 STDOUT terraform:  } 2025-06-22 19:09:52.747059 | orchestrator | 19:09:52.745 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.747062 | orchestrator | 19:09:52.745 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:09:52.747066 | orchestrator | 19:09:52.745 STDOUT terraform:  } 2025-06-22 19:09:52.747070 | orchestrator | 19:09:52.745 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.747074 | orchestrator | 19:09:52.745 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:09:52.747078 | orchestrator | 19:09:52.745 STDOUT terraform:  } 2025-06-22 19:09:52.747082 | orchestrator | 19:09:52.745 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.747086 | orchestrator | 19:09:52.745 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:09:52.747089 | orchestrator | 19:09:52.745 STDOUT terraform:  } 2025-06-22 19:09:52.747093 | orchestrator | 19:09:52.745 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:09:52.747097 | orchestrator | 19:09:52.745 STDOUT terraform:  + fixed_ip { 2025-06-22 19:09:52.747101 | orchestrator | 19:09:52.745 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-22 19:09:52.747105 | orchestrator | 19:09:52.745 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.747109 | orchestrator | 19:09:52.745 STDOUT terraform:  } 2025-06-22 19:09:52.747113 | orchestrator | 19:09:52.745 STDOUT terraform:  } 2025-06-22 19:09:52.747117 | orchestrator | 19:09:52.745 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-22 19:09:52.747124 | orchestrator | 19:09:52.745 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-22 19:09:52.747129 | orchestrator | 19:09:52.745 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.747133 | orchestrator | 19:09:52.745 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:09:52.747136 | orchestrator | 19:09:52.746 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:09:52.747140 | orchestrator | 19:09:52.746 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.747144 | orchestrator | 19:09:52.746 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:09:52.747148 | orchestrator | 19:09:52.746 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:09:52.747152 | orchestrator | 19:09:52.746 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:09:52.747159 | orchestrator | 19:09:52.746 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.747163 | orchestrator | 19:09:52.746 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.747180 | orchestrator | 19:09:52.746 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:09:52.747183 | orchestrator | 19:09:52.746 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.747187 | orchestrator | 19:09:52.746 STDOUT terraform: 
 + port_security_enabled = (known after apply) 2025-06-22 19:09:52.747191 | orchestrator | 19:09:52.746 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.747195 | orchestrator | 19:09:52.746 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.747199 | orchestrator | 19:09:52.746 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:09:52.747203 | orchestrator | 19:09:52.746 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.747207 | orchestrator | 19:09:52.746 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.747211 | orchestrator | 19:09:52.746 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:09:52.747215 | orchestrator | 19:09:52.746 STDOUT terraform:  } 2025-06-22 19:09:52.747218 | orchestrator | 19:09:52.746 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.747222 | orchestrator | 19:09:52.746 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-22 19:09:52.747226 | orchestrator | 19:09:52.746 STDOUT terraform:  } 2025-06-22 19:09:52.747230 | orchestrator | 19:09:52.746 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.747234 | orchestrator | 19:09:52.746 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:09:52.747238 | orchestrator | 19:09:52.746 STDOUT terraform:  } 2025-06-22 19:09:52.747242 | orchestrator | 19:09:52.746 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.747245 | orchestrator | 19:09:52.746 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-22 19:09:52.747249 | orchestrator | 19:09:52.746 STDOUT terraform:  } 2025-06-22 19:09:52.747253 | orchestrator | 19:09:52.746 STDOUT terraform:  + binding (known after apply) 2025-06-22 19:09:52.747257 | orchestrator | 19:09:52.746 STDOUT terraform:  + fixed_ip { 2025-06-22 19:09:52.747261 | orchestrator | 19:09:52.746 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-22 19:09:52.747265 | orchestrator | 19:09:52.746 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.747269 | orchestrator | 19:09:52.746 STDOUT terraform:  } 2025-06-22 19:09:52.747273 | orchestrator | 19:09:52.746 STDOUT terraform:  } 2025-06-22 19:09:52.747277 | orchestrator | 19:09:52.746 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-22 19:09:52.747281 | orchestrator | 19:09:52.746 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-22 19:09:52.747285 | orchestrator | 19:09:52.746 STDOUT terraform:  + force_destroy = false 2025-06-22 19:09:52.747288 | orchestrator | 19:09:52.746 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.747299 | orchestrator | 19:09:52.746 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 19:09:52.747303 | orchestrator | 19:09:52.747 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.747307 | orchestrator | 19:09:52.747 STDOUT terraform:  + router_id = (known after apply) 2025-06-22 19:09:52.747311 | orchestrator | 19:09:52.747 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.747315 | orchestrator | 19:09:52.747 STDOUT terraform:  } 2025-06-22 19:09:52.747319 | orchestrator | 19:09:52.747 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-22 19:09:52.747323 | orchestrator | 19:09:52.747 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-22 19:09:52.747327 | orchestrator | 
19:09:52.747 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.747331 | orchestrator | 19:09:52.747 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.747335 | orchestrator | 19:09:52.747 STDOUT terraform:  + availability_zone_hints = [ 2025-06-22 19:09:52.747340 | orchestrator | 19:09:52.747 STDOUT terraform:  + "nova", 2025-06-22 19:09:52.747344 | orchestrator | 19:09:52.747 STDOUT terraform:  ] 2025-06-22 19:09:52.747350 | orchestrator | 19:09:52.747 STDOUT terraform:  + distributed = (known after apply) 2025-06-22 19:09:52.747378 | orchestrator | 19:09:52.747 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-22 19:09:52.747426 | orchestrator | 19:09:52.747 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-22 19:09:52.747472 | orchestrator | 19:09:52.747 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-06-22 19:09:52.747501 | orchestrator | 19:09:52.747 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.747530 | orchestrator | 19:09:52.747 STDOUT terraform:  + name = "testbed" 2025-06-22 19:09:52.747567 | orchestrator | 19:09:52.747 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.747603 | orchestrator | 19:09:52.747 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.747631 | orchestrator | 19:09:52.747 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-22 19:09:52.747638 | orchestrator | 19:09:52.747 STDOUT terraform:  } 2025-06-22 19:09:52.747698 | orchestrator | 19:09:52.747 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-22 19:09:52.747765 | orchestrator | 19:09:52.747 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-22 19:09:52.747790 | orchestrator | 19:09:52.747 STDOUT terraform:  + description = "ssh" 2025-06-22 19:09:52.747822 | orchestrator | 19:09:52.747 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.747846 | orchestrator | 19:09:52.747 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.747883 | orchestrator | 19:09:52.747 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.747906 | orchestrator | 19:09:52.747 STDOUT terraform:  + port_range_max = 22 2025-06-22 19:09:52.747935 | orchestrator | 19:09:52.747 STDOUT terraform:  + port_range_min = 22 2025-06-22 19:09:52.747962 | orchestrator | 19:09:52.747 STDOUT terraform:  + protocol = "tcp" 2025-06-22 19:09:52.747999 | orchestrator | 19:09:52.747 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.748043 | orchestrator | 19:09:52.747 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.748088 | orchestrator | 19:09:52.748 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.748117 | orchestrator | 19:09:52.748 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.748153 | orchestrator | 19:09:52.748 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.748211 | orchestrator | 19:09:52.748 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.748220 | orchestrator | 19:09:52.748 STDOUT terraform:  } 2025-06-22 19:09:52.748272 | orchestrator | 19:09:52.748 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-22 19:09:52.748326 | orchestrator | 19:09:52.748 STDOUT 
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-22 19:09:52.748357 | orchestrator | 19:09:52.748 STDOUT terraform:  + description = "wireguard" 2025-06-22 19:09:52.748388 | orchestrator | 19:09:52.748 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.748416 | orchestrator | 19:09:52.748 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.748454 | orchestrator | 19:09:52.748 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.748482 | orchestrator | 19:09:52.748 STDOUT terraform:  + port_range_max = 51820 2025-06-22 19:09:52.748512 | orchestrator | 19:09:52.748 STDOUT terraform:  + port_range_min = 51820 2025-06-22 19:09:52.748538 | orchestrator | 19:09:52.748 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:09:52.748574 | orchestrator | 19:09:52.748 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.748610 | orchestrator | 19:09:52.748 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.748649 | orchestrator | 19:09:52.748 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.748681 | orchestrator | 19:09:52.748 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.748721 | orchestrator | 19:09:52.748 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.748754 | orchestrator | 19:09:52.748 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.748762 | orchestrator | 19:09:52.748 STDOUT terraform:  } 2025-06-22 19:09:52.748827 | orchestrator | 19:09:52.748 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-22 19:09:52.748877 | orchestrator | 19:09:52.748 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-22 19:09:52.748905 | orchestrator | 19:09:52.748 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.748930 | orchestrator | 19:09:52.748 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.748971 | orchestrator | 19:09:52.748 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.748999 | orchestrator | 19:09:52.748 STDOUT terraform:  + protocol = "tcp" 2025-06-22 19:09:52.749042 | orchestrator | 19:09:52.748 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.749074 | orchestrator | 19:09:52.749 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.749114 | orchestrator | 19:09:52.749 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.749149 | orchestrator | 19:09:52.749 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-22 19:09:52.749367 | orchestrator | 19:09:52.749 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.749432 | orchestrator | 19:09:52.749 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.749440 | orchestrator | 19:09:52.749 STDOUT terraform:  } 2025-06-22 19:09:52.749498 | orchestrator | 19:09:52.749 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-22 19:09:52.749550 | orchestrator | 19:09:52.749 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-22 19:09:52.749579 | orchestrator | 19:09:52.749 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.749603 | orchestrator | 19:09:52.749 STDOUT terraform:  
+ ethertype = "IPv4" 2025-06-22 19:09:52.749645 | orchestrator | 19:09:52.749 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.749674 | orchestrator | 19:09:52.749 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:09:52.749711 | orchestrator | 19:09:52.749 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.749745 | orchestrator | 19:09:52.749 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.749791 | orchestrator | 19:09:52.749 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.749830 | orchestrator | 19:09:52.749 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-22 19:09:52.749868 | orchestrator | 19:09:52.749 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.749904 | orchestrator | 19:09:52.749 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.749912 | orchestrator | 19:09:52.749 STDOUT terraform:  } 2025-06-22 19:09:52.749965 | orchestrator | 19:09:52.749 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-22 19:09:52.750039 | orchestrator | 19:09:52.749 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-22 19:09:52.750073 | orchestrator | 19:09:52.750 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.750102 | orchestrator | 19:09:52.750 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.750145 | orchestrator | 19:09:52.750 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.750184 | orchestrator | 19:09:52.750 STDOUT terraform:  + protocol = "icmp" 2025-06-22 19:09:52.750231 | orchestrator | 19:09:52.750 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.750263 | orchestrator | 19:09:52.750 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.750323 | orchestrator | 19:09:52.750 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.750353 | orchestrator | 19:09:52.750 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.750389 | orchestrator | 19:09:52.750 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.750426 | orchestrator | 19:09:52.750 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.750434 | orchestrator | 19:09:52.750 STDOUT terraform:  } 2025-06-22 19:09:52.750486 | orchestrator | 19:09:52.750 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-22 19:09:52.750536 | orchestrator | 19:09:52.750 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-22 19:09:52.750563 | orchestrator | 19:09:52.750 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.750595 | orchestrator | 19:09:52.750 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.750635 | orchestrator | 19:09:52.750 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.750666 | orchestrator | 19:09:52.750 STDOUT terraform:  + protocol = "tcp" 2025-06-22 19:09:52.750704 | orchestrator | 19:09:52.750 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.750738 | orchestrator | 19:09:52.750 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.750780 | orchestrator | 19:09:52.750 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 
19:09:52.750808 | orchestrator | 19:09:52.750 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.750846 | orchestrator | 19:09:52.750 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.750883 | orchestrator | 19:09:52.750 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.750893 | orchestrator | 19:09:52.750 STDOUT terraform:  } 2025-06-22 19:09:52.750949 | orchestrator | 19:09:52.750 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-22 19:09:52.751004 | orchestrator | 19:09:52.750 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-22 19:09:52.751034 | orchestrator | 19:09:52.750 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.751059 | orchestrator | 19:09:52.751 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.751097 | orchestrator | 19:09:52.751 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.751120 | orchestrator | 19:09:52.751 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:09:52.751162 | orchestrator | 19:09:52.751 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.751212 | orchestrator | 19:09:52.751 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.751252 | orchestrator | 19:09:52.751 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.751280 | orchestrator | 19:09:52.751 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.751324 | orchestrator | 19:09:52.751 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.751367 | orchestrator | 19:09:52.751 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.751375 | orchestrator | 19:09:52.751 STDOUT terraform:  } 2025-06-22 19:09:52.751425 | orchestrator | 19:09:52.751 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-22 19:09:52.751477 | orchestrator | 19:09:52.751 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-22 19:09:52.751511 | orchestrator | 19:09:52.751 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.751541 | orchestrator | 19:09:52.751 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.751578 | orchestrator | 19:09:52.751 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.751605 | orchestrator | 19:09:52.751 STDOUT terraform:  + protocol = "icmp" 2025-06-22 19:09:52.751643 | orchestrator | 19:09:52.751 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.751677 | orchestrator | 19:09:52.751 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.751714 | orchestrator | 19:09:52.751 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.751749 | orchestrator | 19:09:52.751 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.751783 | orchestrator | 19:09:52.751 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.754839 | orchestrator | 19:09:52.751 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.754867 | orchestrator | 19:09:52.751 STDOUT terraform:  } 2025-06-22 19:09:52.754873 | orchestrator | 19:09:52.751 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-22 19:09:52.754878 | orchestrator | 
19:09:52.751 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-22 19:09:52.754882 | orchestrator | 19:09:52.751 STDOUT terraform:  + description = "vrrp" 2025-06-22 19:09:52.754886 | orchestrator | 19:09:52.751 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.754890 | orchestrator | 19:09:52.751 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.754921 | orchestrator | 19:09:52.751 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.754926 | orchestrator | 19:09:52.752 STDOUT terraform:  + protocol = "112" 2025-06-22 19:09:52.754930 | orchestrator | 19:09:52.752 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.754934 | orchestrator | 19:09:52.752 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.754938 | orchestrator | 19:09:52.752 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.754942 | orchestrator | 19:09:52.752 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.754953 | orchestrator | 19:09:52.752 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.754957 | orchestrator | 19:09:52.752 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.754960 | orchestrator | 19:09:52.752 STDOUT terraform:  } 2025-06-22 19:09:52.754964 | orchestrator | 19:09:52.752 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-22 19:09:52.754969 | orchestrator | 19:09:52.752 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-22 19:09:52.754973 | orchestrator | 19:09:52.752 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.754976 | orchestrator | 19:09:52.752 STDOUT terraform:  + description = "management security group" 2025-06-22 19:09:52.754980 | orchestrator | 19:09:52.752 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.754984 | orchestrator | 19:09:52.752 STDOUT terraform:  + name = "testbed-management" 2025-06-22 19:09:52.754988 | orchestrator | 19:09:52.752 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.754992 | orchestrator | 19:09:52.752 STDOUT terraform:  + stateful = (known after apply) 2025-06-22 19:09:52.754996 | orchestrator | 19:09:52.752 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.755000 | orchestrator | 19:09:52.752 STDOUT terraform:  } 2025-06-22 19:09:52.755003 | orchestrator | 19:09:52.752 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-22 19:09:52.755007 | orchestrator | 19:09:52.752 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-22 19:09:52.755014 | orchestrator | 19:09:52.752 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.755018 | orchestrator | 19:09:52.752 STDOUT terraform:  + description = "node security group" 2025-06-22 19:09:52.755021 | orchestrator | 19:09:52.752 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.755025 | orchestrator | 19:09:52.752 STDOUT terraform:  + name = "testbed-node" 2025-06-22 19:09:52.755029 | orchestrator | 19:09:52.752 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.755033 | orchestrator | 19:09:52.752 STDOUT terraform:  + stateful = (known after apply) 2025-06-22 19:09:52.755037 | orchestrator | 19:09:52.752 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-06-22 19:09:52.755041 | orchestrator | 19:09:52.752 STDOUT terraform:  } 2025-06-22 19:09:52.755052 | orchestrator | 19:09:52.752 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-22 19:09:52.755057 | orchestrator | 19:09:52.752 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-22 19:09:52.755061 | orchestrator | 19:09:52.752 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.755064 | orchestrator | 19:09:52.752 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-22 19:09:52.755068 | orchestrator | 19:09:52.752 STDOUT terraform:  + dns_nameservers = [ 2025-06-22 19:09:52.755072 | orchestrator | 19:09:52.752 STDOUT terraform:  + "8.8.8.8", 2025-06-22 19:09:52.755076 | orchestrator | 19:09:52.752 STDOUT terraform:  + "9.9.9.9", 2025-06-22 19:09:52.755083 | orchestrator | 19:09:52.752 STDOUT terraform:  ] 2025-06-22 19:09:52.755087 | orchestrator | 19:09:52.752 STDOUT terraform:  + enable_dhcp = true 2025-06-22 19:09:52.755091 | orchestrator | 19:09:52.752 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-22 19:09:52.755095 | orchestrator | 19:09:52.752 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.755099 | orchestrator | 19:09:52.753 STDOUT terraform:  + ip_version = 4 2025-06-22 19:09:52.755103 | orchestrator | 19:09:52.753 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-22 19:09:52.755107 | orchestrator | 19:09:52.753 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-22 19:09:52.755111 | orchestrator | 19:09:52.753 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-22 19:09:52.755115 | orchestrator | 19:09:52.753 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.755118 | orchestrator | 19:09:52.753 STDOUT terraform:  + no_gateway = false 2025-06-22 19:09:52.755122 | orchestrator | 19:09:52.753 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.755126 | orchestrator | 19:09:52.753 STDOUT terraform:  + service_types = (known after apply) 2025-06-22 19:09:52.755130 | orchestrator | 19:09:52.753 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.755134 | orchestrator | 19:09:52.753 STDOUT terraform:  + allocation_pool { 2025-06-22 19:09:52.755138 | orchestrator | 19:09:52.753 STDOUT terraform:  + end = "192.168.31.250" 2025-06-22 19:09:52.755142 | orchestrator | 19:09:52.753 STDOUT terraform:  + start = "192.168.31.200" 2025-06-22 19:09:52.755145 | orchestrator | 19:09:52.753 STDOUT terraform:  } 2025-06-22 19:09:52.755149 | orchestrator | 19:09:52.753 STDOUT terraform:  } 2025-06-22 19:09:52.755153 | orchestrator | 19:09:52.753 STDOUT terraform:  # terraform_data.image will be created 2025-06-22 19:09:52.755157 | orchestrator | 19:09:52.753 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-22 19:09:52.755161 | orchestrator | 19:09:52.753 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.755204 | orchestrator | 19:09:52.753 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-22 19:09:52.755212 | orchestrator | 19:09:52.753 STDOUT terraform:  + output = (known after apply) 2025-06-22 19:09:52.755218 | orchestrator | 19:09:52.753 STDOUT terraform:  } 2025-06-22 19:09:52.755224 | orchestrator | 19:09:52.753 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-22 19:09:52.755231 | orchestrator | 19:09:52.753 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-22 
19:09:52.755235 | orchestrator | 19:09:52.753 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.755239 | orchestrator | 19:09:52.753 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-22 19:09:52.755243 | orchestrator | 19:09:52.753 STDOUT terraform:  + output = (known after apply) 2025-06-22 19:09:52.755247 | orchestrator | 19:09:52.753 STDOUT terraform:  } 2025-06-22 19:09:52.755250 | orchestrator | 19:09:52.753 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-22 19:09:52.755254 | orchestrator | 19:09:52.753 STDOUT terraform: Changes to Outputs: 2025-06-22 19:09:52.755262 | orchestrator | 19:09:52.753 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-22 19:09:52.755266 | orchestrator | 19:09:52.753 STDOUT terraform:  + private_key = (sensitive value) 2025-06-22 19:09:52.960051 | orchestrator | 19:09:52.959 STDOUT terraform: terraform_data.image: Creating... 2025-06-22 19:09:52.960117 | orchestrator | 19:09:52.959 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=f7c0df33-b458-076e-2cf7-1f2d489d913e] 2025-06-22 19:09:52.961510 | orchestrator | 19:09:52.960 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-22 19:09:52.961546 | orchestrator | 19:09:52.960 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=57fb6a5a-7dcf-24a0-a002-bc09375e82f2] 2025-06-22 19:09:52.989716 | orchestrator | 19:09:52.989 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-22 19:09:52.999409 | orchestrator | 19:09:52.999 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-22 19:09:53.010094 | orchestrator | 19:09:53.009 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-22 19:09:53.011283 | orchestrator | 19:09:53.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-22 19:09:53.012431 | orchestrator | 19:09:53.012 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-06-22 19:09:53.014109 | orchestrator | 19:09:53.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-06-22 19:09:53.014727 | orchestrator | 19:09:53.014 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-22 19:09:53.015551 | orchestrator | 19:09:53.015 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-22 19:09:53.026235 | orchestrator | 19:09:53.026 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-22 19:09:53.026740 | orchestrator | 19:09:53.026 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-06-22 19:09:53.463159 | orchestrator | 19:09:53.462 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-22 19:09:53.469954 | orchestrator | 19:09:53.469 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-22 19:09:53.666540 | orchestrator | 19:09:53.666 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-06-22 19:09:53.670603 | orchestrator | 19:09:53.670 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 
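The management subnet planned just above is the piece most later resources hang off. A minimal HCL sketch of an equivalent resource, with every value taken from the plan output and only the network_id reference assumed (this is not the actual testbed module):

    resource "openstack_networking_subnet_v2" "subnet_management" {
      name            = "subnet-testbed-management"
      network_id      = openstack_networking_network_v2.net_management.id  # assumed reference
      cidr            = "192.168.16.0/20"
      ip_version      = 4
      enable_dhcp     = true
      dns_nameservers = ["8.8.8.8", "9.9.9.9"]

      # DHCP only hands out addresses from this pool; the node and manager
      # ports use fixed addresses outside of it.
      allocation_pool {
        start = "192.168.31.200"
        end   = "192.168.31.250"
      }
    }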
2025-06-22 19:09:53.729423 | orchestrator | 19:09:53.728 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-22 19:09:53.738901 | orchestrator | 19:09:53.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-22 19:09:59.087465 | orchestrator | 19:09:59.087 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=cbc2739d-a0a0-481f-9090-e6b922cff60b] 2025-06-22 19:09:59.096699 | orchestrator | 19:09:59.096 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-22 19:10:03.012299 | orchestrator | 19:10:03.011 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-22 19:10:03.012387 | orchestrator | 19:10:03.012 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-06-22 19:10:03.013317 | orchestrator | 19:10:03.013 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-22 19:10:03.015748 | orchestrator | 19:10:03.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-22 19:10:03.015840 | orchestrator | 19:10:03.015 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-22 19:10:03.028125 | orchestrator | 19:10:03.027 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-22 19:10:03.028490 | orchestrator | 19:10:03.028 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-22 19:10:03.471023 | orchestrator | 19:10:03.470 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-22 19:10:03.648983 | orchestrator | 19:10:03.648 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=b3712533-4ba6-4a13-8d22-1afd9c8ce6f2] 2025-06-22 19:10:03.654137 | orchestrator | 19:10:03.653 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=f438dec5-52e6-4e07-b468-2b34fd5e0bbc] 2025-06-22 19:10:03.662075 | orchestrator | 19:10:03.661 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=9d381e45-09fd-4a20-ab1c-6f33bb7ad47a] 2025-06-22 19:10:03.666675 | orchestrator | 19:10:03.662 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-22 19:10:03.670121 | orchestrator | 19:10:03.669 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-22 19:10:03.680624 | orchestrator | 19:10:03.680 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=d397d31c-b886-4607-b3cb-2d758622dade] 2025-06-22 19:10:03.681732 | orchestrator | 19:10:03.681 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-06-22 19:10:03.685612 | orchestrator | 19:10:03.685 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=10758f5a-a518-4894-b68c-79c541e050d1] 2025-06-22 19:10:03.689168 | orchestrator | 19:10:03.689 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 
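The node_port_management entries in the plan above all follow the same shape: one fixed management address plus a set of allowed_address_pairs the port may answer for. A sketch under those values (the count, the address formula, and the resource references are assumptions, not the real module):

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6                                                   # indices [0..5] appear in the plan
      network_id = openstack_networking_network_v2.net_management.id   # assumed reference

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id
        ip_address = "192.168.16.${10 + count.index}"                  # [5] gets 192.168.16.15 as planned
      }

      # Additional prefixes/addresses this port is allowed to carry,
      # as listed in the plan output.
      allowed_address_pairs {
        ip_address = "192.168.112.0/20"
      }
      allowed_address_pairs {
        ip_address = "192.168.16.254/20"
      }
    }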
2025-06-22 19:10:03.695888 | orchestrator | 19:10:03.695 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-06-22 19:10:03.707073 | orchestrator | 19:10:03.706 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=f12434e6-788f-4ffb-a434-d641146d84ae] 2025-06-22 19:10:03.708261 | orchestrator | 19:10:03.707 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=66d4c0b6-de40-44d2-a991-376660387b3d] 2025-06-22 19:10:03.726680 | orchestrator | 19:10:03.726 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-06-22 19:10:03.731258 | orchestrator | 19:10:03.731 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-22 19:10:03.737833 | orchestrator | 19:10:03.737 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=716b470aec99a4bdbc4648a5a71c0d69d53a5ae2] 2025-06-22 19:10:03.738991 | orchestrator | 19:10:03.738 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=7cac0b6c55a8e61df123bc978732f19cf39df0cb] 2025-06-22 19:10:03.739052 | orchestrator | 19:10:03.738 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-22 19:10:03.748227 | orchestrator | 19:10:03.747 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-22 19:10:03.748856 | orchestrator | 19:10:03.748 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-22 19:10:03.925973 | orchestrator | 19:10:03.925 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=986f77d9-7eeb-491e-bdbe-4c9e8ad066d2] 2025-06-22 19:10:04.090075 | orchestrator | 19:10:04.089 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=ca04149b-3774-4fe5-a4a8-e7007e740a3b] 2025-06-22 19:10:09.099043 | orchestrator | 19:10:09.098 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-22 19:10:09.404543 | orchestrator | 19:10:09.404 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=215d9bf5-4869-41a3-a63c-a129ce87d105] 2025-06-22 19:10:09.753047 | orchestrator | 19:10:09.752 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=724691e6-c899-4161-81cf-4d85454545ad] 2025-06-22 19:10:09.761286 | orchestrator | 19:10:09.761 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-22 19:10:13.665978 | orchestrator | 19:10:13.665 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-22 19:10:13.670297 | orchestrator | 19:10:13.669 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-06-22 19:10:13.683499 | orchestrator | 19:10:13.683 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-22 19:10:13.690814 | orchestrator | 19:10:13.690 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-22 19:10:13.696129 | orchestrator | 19:10:13.695 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... 
[10s elapsed] 2025-06-22 19:10:13.750387 | orchestrator | 19:10:13.750 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-06-22 19:10:14.064917 | orchestrator | 19:10:14.064 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=e5fdbcb2-2928-4111-aea0-2f8879b135c3] 2025-06-22 19:10:14.076143 | orchestrator | 19:10:14.075 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=0983c77d-7f56-479f-b361-5b63b2990634] 2025-06-22 19:10:14.129648 | orchestrator | 19:10:14.129 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4] 2025-06-22 19:10:14.161955 | orchestrator | 19:10:14.161 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=3ad08e50-43a2-44f4-b591-5a498ff5d4c6] 2025-06-22 19:10:14.180095 | orchestrator | 19:10:14.179 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=0f50b971-8498-45b8-bf00-ff2a09b130da] 2025-06-22 19:10:14.185118 | orchestrator | 19:10:14.184 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=396e8a64-22ab-4924-be85-f2df1fce7ca0] 2025-06-22 19:10:17.627349 | orchestrator | 19:10:17.626 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=d0b34739-f705-4fb4-978e-f34fd267b579] 2025-06-22 19:10:17.633952 | orchestrator | 19:10:17.633 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-06-22 19:10:17.637048 | orchestrator | 19:10:17.636 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-22 19:10:17.639600 | orchestrator | 19:10:17.639 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-22 19:10:17.851168 | orchestrator | 19:10:17.850 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=b8185c25-a7a9-436c-bcf9-128152d9646e] 2025-06-22 19:10:17.869400 | orchestrator | 19:10:17.869 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-22 19:10:17.874138 | orchestrator | 19:10:17.873 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-22 19:10:17.874338 | orchestrator | 19:10:17.874 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-22 19:10:17.874850 | orchestrator | 19:10:17.874 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-22 19:10:17.874869 | orchestrator | 19:10:17.874 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-22 19:10:17.875951 | orchestrator | 19:10:17.875 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 
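The management security group and its SSH rule being created here correspond one-to-one to the plan entries above; a sketch with the plan's values (only the security_group_id reference is assumed):

    resource "openstack_networking_secgroup_v2" "security_group_management" {
      name        = "testbed-management"
      description = "management security group"
    }

    # Ingress TCP/22 from anywhere, matching security_group_management_rule1.
    resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      description       = "ssh"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "tcp"
      port_range_min    = 22
      port_range_max    = 22
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_management.id
    }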
2025-06-22 19:10:18.044542 | orchestrator | 19:10:18.044 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=8c788f6a-096d-4305-af7e-6d93a0a1c20b] 2025-06-22 19:10:18.293262 | orchestrator | 19:10:18.292 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=7260a3ef-ba33-4994-87e3-6060fab53a90] 2025-06-22 19:10:18.300831 | orchestrator | 19:10:18.300 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-06-22 19:10:18.301678 | orchestrator | 19:10:18.301 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-06-22 19:10:18.303342 | orchestrator | 19:10:18.303 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-22 19:10:18.311372 | orchestrator | 19:10:18.311 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-22 19:10:18.349456 | orchestrator | 19:10:18.349 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=56728f67-13a7-4f22-af19-fc3af1bf7bc7] 2025-06-22 19:10:18.368232 | orchestrator | 19:10:18.367 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-22 19:10:18.435727 | orchestrator | 19:10:18.435 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=91c4b5e9-31be-44af-b7f8-6da04d9abec7] 2025-06-22 19:10:18.446442 | orchestrator | 19:10:18.446 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-22 19:10:18.709235 | orchestrator | 19:10:18.708 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=d03bcb64-f617-4b59-a0b5-6bb0c595e127] 2025-06-22 19:10:18.726521 | orchestrator | 19:10:18.726 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-22 19:10:18.760722 | orchestrator | 19:10:18.760 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=7e74e5d0-25f3-4911-962a-162c2e02c708] 2025-06-22 19:10:18.776036 | orchestrator | 19:10:18.775 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-06-22 19:10:19.066582 | orchestrator | 19:10:19.066 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=e1307a00-c4b4-455e-af94-70ecc86179ef] 2025-06-22 19:10:19.073273 | orchestrator | 19:10:19.073 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-06-22 19:10:19.271472 | orchestrator | 19:10:19.270 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=bd770810-1aa0-4c83-adf9-5626d71243c2] 2025-06-22 19:10:19.273451 | orchestrator | 19:10:19.272 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=1aae91eb-450f-4aa9-9d7b-e577f385cd4e] 2025-06-22 19:10:19.287883 | orchestrator | 19:10:19.287 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
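The VRRP rule created just above uses a raw IP protocol number instead of tcp/udp/icmp; a sketch of such a rule (which security group it attaches to is not visible in this log, so that reference is an assumption):

    resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      description       = "vrrp"
      direction         = "ingress"
      ethertype         = "IPv4"
      protocol          = "112"                 # IP protocol 112 = VRRP
      remote_ip_prefix  = "0.0.0.0/0"
      security_group_id = openstack_networking_secgroup_v2.security_group_node.id  # assumed target group
    }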
2025-06-22 19:10:19.434390 | orchestrator | 19:10:19.433 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=49a3f1c5-0a23-436f-81d0-8d4dc603497c] 2025-06-22 19:10:23.787297 | orchestrator | 19:10:23.786 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=e8acfea4-3197-442b-bb79-c2e5935156aa] 2025-06-22 19:10:24.066842 | orchestrator | 19:10:24.066 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=8a0b7986-218b-4e84-96f5-1cbe9de44500] 2025-06-22 19:10:24.175163 | orchestrator | 19:10:24.174 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=dca0d463-4f09-420a-a048-12573b0e3ea2] 2025-06-22 19:10:24.338108 | orchestrator | 19:10:24.337 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=1bbe0086-d5cf-4f82-a7a1-1576dd1261bb] 2025-06-22 19:10:24.413029 | orchestrator | 19:10:24.412 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=a9934e36-751f-4e4e-bb92-45fb660ceb9c] 2025-06-22 19:10:24.559468 | orchestrator | 19:10:24.559 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 7s [id=aa24d7a4-5e5d-42e9-8464-07c763efe5f5] 2025-06-22 19:10:24.835427 | orchestrator | 19:10:24.834 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=0a8eca33-b257-4cde-8abf-1cdbe1f815de] 2025-06-22 19:10:26.370183 | orchestrator | 19:10:26.369 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=ddf2ac43-7eea-4f3f-a12d-299558673c1a] 2025-06-22 19:10:26.390917 | orchestrator | 19:10:26.390 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-22 19:10:26.402573 | orchestrator | 19:10:26.402 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-22 19:10:26.418733 | orchestrator | 19:10:26.418 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-22 19:10:26.420363 | orchestrator | 19:10:26.420 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-22 19:10:26.428881 | orchestrator | 19:10:26.428 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-22 19:10:26.434482 | orchestrator | 19:10:26.434 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-22 19:10:26.436024 | orchestrator | 19:10:26.435 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-22 19:10:33.271470 | orchestrator | 19:10:33.271 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=db735c77-7a2d-4446-924f-b5171b855280] 2025-06-22 19:10:33.282222 | orchestrator | 19:10:33.281 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-06-22 19:10:33.287543 | orchestrator | 19:10:33.287 STDOUT terraform: local_file.inventory: Creating... 2025-06-22 19:10:33.287611 | orchestrator | 19:10:33.287 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
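The router and its interface that complete below follow directly from the plan entries earlier: the router is uplinked to the given external network and the interface plugs it into the management subnet. A sketch with the planned values (the subnet reference is assumed):

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id  # assumed reference
    }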
2025-06-22 19:10:33.295558 | orchestrator | 19:10:33.295 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b755b3894674ddc9197e20442c9951931b2dd7a3] 2025-06-22 19:10:33.296419 | orchestrator | 19:10:33.296 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=84fa8bb8673a356ef00e993c9148883749b6ed76] 2025-06-22 19:10:33.966705 | orchestrator | 19:10:33.966 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=db735c77-7a2d-4446-924f-b5171b855280] 2025-06-22 19:10:36.405825 | orchestrator | 19:10:36.405 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-22 19:10:36.420048 | orchestrator | 19:10:36.419 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-22 19:10:36.421090 | orchestrator | 19:10:36.420 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-06-22 19:10:36.433622 | orchestrator | 19:10:36.433 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-22 19:10:36.438709 | orchestrator | 19:10:36.438 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-22 19:10:36.439006 | orchestrator | 19:10:36.438 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-22 19:10:46.406430 | orchestrator | 19:10:46.406 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-22 19:10:46.420747 | orchestrator | 19:10:46.420 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-22 19:10:46.421726 | orchestrator | 19:10:46.421 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-06-22 19:10:46.434283 | orchestrator | 19:10:46.433 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-22 19:10:46.439555 | orchestrator | 19:10:46.439 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-22 19:10:46.439820 | orchestrator | 19:10:46.439 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-22 19:10:56.406514 | orchestrator | 19:10:56.406 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-06-22 19:10:56.421638 | orchestrator | 19:10:56.421 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-06-22 19:10:56.421737 | orchestrator | 19:10:56.421 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-06-22 19:10:56.434474 | orchestrator | 19:10:56.434 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-06-22 19:10:56.439509 | orchestrator | 19:10:56.439 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-06-22 19:10:56.440530 | orchestrator | 19:10:56.440 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-06-22 19:10:56.879262 | orchestrator | 19:10:56.878 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=5e8a8c65-8397-4b4f-bebe-ef9950a84c94] 2025-06-22 19:10:56.901638 | orchestrator | 19:10:56.901 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=44e9b7a2-a939-4303-86dc-e95f73992325] 2025-06-22 19:10:56.958156 | orchestrator | 19:10:56.957 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=79e3a0f1-a552-49f7-b576-1dd8cd80e572] 2025-06-22 19:10:57.083409 | orchestrator | 19:10:57.082 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=dcaecf79-4399-4954-9ade-70980c08b591] 2025-06-22 19:10:57.224041 | orchestrator | 19:10:57.223 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 31s [id=0c200cd1-66c9-4754-9922-d282e3780be5] 2025-06-22 19:10:57.231149 | orchestrator | 19:10:57.230 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=c77486f8-c67c-4657-901d-c139a513ab52] 2025-06-22 19:10:57.247957 | orchestrator | 19:10:57.247 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-22 19:10:57.254703 | orchestrator | 19:10:57.254 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6192168704830026558] 2025-06-22 19:10:57.254909 | orchestrator | 19:10:57.254 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-22 19:10:57.257964 | orchestrator | 19:10:57.257 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-22 19:10:57.259892 | orchestrator | 19:10:57.259 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-22 19:10:57.277330 | orchestrator | 19:10:57.277 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-22 19:10:57.286759 | orchestrator | 19:10:57.286 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-22 19:10:57.287913 | orchestrator | 19:10:57.286 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-22 19:10:57.289014 | orchestrator | 19:10:57.288 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-22 19:10:57.290908 | orchestrator | 19:10:57.290 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-22 19:10:57.294706 | orchestrator | 19:10:57.294 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-22 19:10:57.296316 | orchestrator | 19:10:57.296 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
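The nine volume attachments that follow pair the node_volume resources with the node servers created above; matching the instance IDs in the log, three volumes land on each of three servers. A sketch of such an attachment resource (the count and the index arithmetic are assumptions inferred from the log, not the real module):

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count     = 9                                                          # node_volume_attachment[0..8]
      volume_id = openstack_blockstorage_volume_v3.node_volume[count.index].id
      # Round-robin over node_server[3..5], consistent with the instance IDs below.
      instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
    }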
2025-06-22 19:11:02.593200 | orchestrator | 19:11:02.592 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=0c200cd1-66c9-4754-9922-d282e3780be5/10758f5a-a518-4894-b68c-79c541e050d1] 2025-06-22 19:11:02.607254 | orchestrator | 19:11:02.606 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=5e8a8c65-8397-4b4f-bebe-ef9950a84c94/b3712533-4ba6-4a13-8d22-1afd9c8ce6f2] 2025-06-22 19:11:02.632816 | orchestrator | 19:11:02.631 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=dcaecf79-4399-4954-9ade-70980c08b591/f438dec5-52e6-4e07-b468-2b34fd5e0bbc] 2025-06-22 19:11:02.637301 | orchestrator | 19:11:02.636 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=5e8a8c65-8397-4b4f-bebe-ef9950a84c94/f12434e6-788f-4ffb-a434-d641146d84ae] 2025-06-22 19:11:02.650128 | orchestrator | 19:11:02.649 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=0c200cd1-66c9-4754-9922-d282e3780be5/ca04149b-3774-4fe5-a4a8-e7007e740a3b] 2025-06-22 19:11:02.661794 | orchestrator | 19:11:02.661 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=dcaecf79-4399-4954-9ade-70980c08b591/66d4c0b6-de40-44d2-a991-376660387b3d] 2025-06-22 19:11:02.684033 | orchestrator | 19:11:02.683 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 6s [id=0c200cd1-66c9-4754-9922-d282e3780be5/9d381e45-09fd-4a20-ab1c-6f33bb7ad47a] 2025-06-22 19:11:02.698309 | orchestrator | 19:11:02.697 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=5e8a8c65-8397-4b4f-bebe-ef9950a84c94/986f77d9-7eeb-491e-bdbe-4c9e8ad066d2] 2025-06-22 19:11:02.703764 | orchestrator | 19:11:02.703 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=dcaecf79-4399-4954-9ade-70980c08b591/d397d31c-b886-4607-b3cb-2d758622dade] 2025-06-22 19:11:07.301422 | orchestrator | 19:11:07.301 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-22 19:11:17.302082 | orchestrator | 19:11:17.301 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-22 19:11:17.575324 | orchestrator | 19:11:17.574 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=8ec4d552-5bb1-477f-96d4-383e390c9056] 2025-06-22 19:11:17.594803 | orchestrator | 19:11:17.594 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
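Apply finishes with two outputs that Terraform masks because they are marked sensitive; output declarations along these lines would produce exactly this behaviour (the value expressions are assumptions, the sensitive flag matches the "(sensitive value)" entries in the plan):

    output "manager_address" {
      value     = openstack_networking_floatingip_v2.manager_floating_ip.address  # assumed source
      sensitive = true
    }

    output "private_key" {
      value     = openstack_compute_keypair_v2.key.private_key                    # assumed source
      sensitive = true
    }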
2025-06-22 19:11:17.594889 | orchestrator | 19:11:17.594 STDOUT terraform: Outputs: 2025-06-22 19:11:17.594932 | orchestrator | 19:11:17.594 STDOUT terraform: manager_address = 2025-06-22 19:11:17.594946 | orchestrator | 19:11:17.594 STDOUT terraform: private_key = 2025-06-22 19:11:18.037579 | orchestrator | ok: Runtime: 0:01:35.713253 2025-06-22 19:11:18.077516 | 2025-06-22 19:11:18.077729 | TASK [Create infrastructure (stable)] 2025-06-22 19:11:18.626498 | orchestrator | skipping: Conditional result was False 2025-06-22 19:11:18.644535 | 2025-06-22 19:11:18.644708 | TASK [Fetch manager address] 2025-06-22 19:11:19.101632 | orchestrator | ok 2025-06-22 19:11:19.111851 | 2025-06-22 19:11:19.111978 | TASK [Set manager_host address] 2025-06-22 19:11:19.190005 | orchestrator | ok 2025-06-22 19:11:19.208871 | 2025-06-22 19:11:19.209035 | LOOP [Update ansible collections] 2025-06-22 19:11:26.497725 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:11:26.497941 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-22 19:11:26.497977 | orchestrator | Starting galaxy collection install process 2025-06-22 19:11:26.498002 | orchestrator | Process install dependency map 2025-06-22 19:11:26.498031 | orchestrator | Starting collection install process 2025-06-22 19:11:26.498053 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2025-06-22 19:11:26.498087 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2025-06-22 19:11:26.498119 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-22 19:11:26.498167 | orchestrator | ok: Item: commons Runtime: 0:00:06.963089 2025-06-22 19:11:31.385657 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-22 19:11:31.385787 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:11:31.385820 | orchestrator | Starting galaxy collection install process 2025-06-22 19:11:31.385843 | orchestrator | Process install dependency map 2025-06-22 19:11:31.385865 | orchestrator | Starting collection install process 2025-06-22 19:11:31.385885 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2025-06-22 19:11:31.385905 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2025-06-22 19:11:31.385926 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-22 19:11:31.385959 | orchestrator | ok: Item: services Runtime: 0:00:04.620095 2025-06-22 19:11:31.406326 | 2025-06-22 19:11:31.406461 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-22 19:11:41.988681 | orchestrator | ok 2025-06-22 19:11:41.998617 | 2025-06-22 19:11:41.998732 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-22 19:12:42.029270 | orchestrator | ok 2025-06-22 19:12:42.041178 | 2025-06-22 19:12:42.041300 | TASK [Fetch manager ssh hostkey] 2025-06-22 19:12:43.618720 | orchestrator | Output suppressed because no_log was given 2025-06-22 19:12:43.626258 | 2025-06-22 19:12:43.626384 | TASK [Get ssh keypair from terraform environment] 2025-06-22 19:12:44.163593 | orchestrator 
| ok: Runtime: 0:00:00.005337 2025-06-22 19:12:44.181757 | 2025-06-22 19:12:44.181908 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-22 19:12:44.221900 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-22 19:12:44.232115 | 2025-06-22 19:12:44.232250 | TASK [Run manager part 0] 2025-06-22 19:12:46.942914 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:12:47.067134 | orchestrator | 2025-06-22 19:12:47.067205 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-22 19:12:47.067220 | orchestrator | 2025-06-22 19:12:47.067278 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-22 19:12:48.655922 | orchestrator | ok: [testbed-manager] 2025-06-22 19:12:48.656008 | orchestrator | 2025-06-22 19:12:48.656056 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-22 19:12:48.656078 | orchestrator | 2025-06-22 19:12:48.656098 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:12:50.830480 | orchestrator | ok: [testbed-manager] 2025-06-22 19:12:50.830536 | orchestrator | 2025-06-22 19:12:50.830544 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-22 19:12:51.486281 | orchestrator | ok: [testbed-manager] 2025-06-22 19:12:51.486353 | orchestrator | 2025-06-22 19:12:51.486366 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-22 19:12:51.543938 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:51.543989 | orchestrator | 2025-06-22 19:12:51.544001 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-22 19:12:51.585002 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:51.585054 | orchestrator | 2025-06-22 19:12:51.585063 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-22 19:12:51.610706 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:51.610748 | orchestrator | 2025-06-22 19:12:51.610754 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-22 19:12:51.648514 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:51.648569 | orchestrator | 2025-06-22 19:12:51.648577 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-22 19:12:51.677726 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:51.677772 | orchestrator | 2025-06-22 19:12:51.677780 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-22 19:12:51.706264 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:51.706311 | orchestrator | 2025-06-22 19:12:51.706319 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-22 19:12:51.732668 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:51.732707 | orchestrator | 2025-06-22 19:12:51.732714 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-22 19:12:52.518583 | orchestrator | changed: 
[testbed-manager] 2025-06-22 19:12:52.518661 | orchestrator | 2025-06-22 19:12:52.518677 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-22 19:16:00.420886 | orchestrator | changed: [testbed-manager] 2025-06-22 19:16:00.421101 | orchestrator | 2025-06-22 19:16:00.421132 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-22 19:17:15.142381 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:15.142478 | orchestrator | 2025-06-22 19:17:15.142492 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-22 19:17:34.875546 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:34.875636 | orchestrator | 2025-06-22 19:17:34.875655 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-22 19:17:43.308614 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:43.308706 | orchestrator | 2025-06-22 19:17:43.308723 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-22 19:17:43.356629 | orchestrator | ok: [testbed-manager] 2025-06-22 19:17:43.356691 | orchestrator | 2025-06-22 19:17:43.356699 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-22 19:17:44.171128 | orchestrator | ok: [testbed-manager] 2025-06-22 19:17:44.171205 | orchestrator | 2025-06-22 19:17:44.171218 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-22 19:17:44.921222 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:44.921265 | orchestrator | 2025-06-22 19:17:44.921274 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-22 19:17:51.350149 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:51.350188 | orchestrator | 2025-06-22 19:17:51.350210 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-22 19:17:57.427823 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:57.427915 | orchestrator | 2025-06-22 19:17:57.427934 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-22 19:18:00.215316 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:00.215440 | orchestrator | 2025-06-22 19:18:00.215459 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-22 19:18:02.078149 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:02.078232 | orchestrator | 2025-06-22 19:18:02.078248 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-22 19:18:03.232722 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-22 19:18:03.232784 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-22 19:18:03.232794 | orchestrator | 2025-06-22 19:18:03.232802 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-22 19:18:03.275919 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-22 19:18:03.275981 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-22 19:18:03.275991 | orchestrator | 2.19. 
Deprecation warnings can be disabled by setting 2025-06-22 19:18:03.276000 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-06-22 19:18:14.358786 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-22 19:18:14.358883 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-22 19:18:14.358898 | orchestrator | 2025-06-22 19:18:14.358911 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-22 19:18:14.945557 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:14.945614 | orchestrator | 2025-06-22 19:18:14.945623 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-22 19:21:40.665040 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-22 19:21:40.665116 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-22 19:21:40.665129 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-22 19:21:40.665139 | orchestrator | 2025-06-22 19:21:40.665148 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-22 19:21:43.052883 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-22 19:21:43.052946 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-22 19:21:43.052960 | orchestrator | 2025-06-22 19:21:43.052972 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-22 19:21:43.052984 | orchestrator | 2025-06-22 19:21:43.052995 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:21:44.512784 | orchestrator | ok: [testbed-manager] 2025-06-22 19:21:44.512867 | orchestrator | 2025-06-22 19:21:44.512885 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-22 19:21:44.569160 | orchestrator | ok: [testbed-manager] 2025-06-22 19:21:44.569215 | orchestrator | 2025-06-22 19:21:44.569225 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-22 19:21:44.654574 | orchestrator | ok: [testbed-manager] 2025-06-22 19:21:44.654648 | orchestrator | 2025-06-22 19:21:44.654663 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-22 19:21:45.461923 | orchestrator | changed: [testbed-manager] 2025-06-22 19:21:45.461963 | orchestrator | 2025-06-22 19:21:45.461970 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-22 19:21:46.178641 | orchestrator | changed: [testbed-manager] 2025-06-22 19:21:46.178709 | orchestrator | 2025-06-22 19:21:46.178726 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-22 19:21:47.576159 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-22 19:21:47.576244 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-22 19:21:47.576258 | orchestrator | 2025-06-22 19:21:47.576285 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-22 19:21:48.913282 | orchestrator | changed: [testbed-manager] 2025-06-22 19:21:48.913371 | orchestrator | 2025-06-22 19:21:48.913386 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc 
configuration file] *** 2025-06-22 19:21:50.664306 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:21:50.664376 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-22 19:21:50.664387 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:21:50.664396 | orchestrator | 2025-06-22 19:21:50.664406 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-06-22 19:21:50.721581 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:21:50.722235 | orchestrator | 2025-06-22 19:21:50.722252 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-22 19:21:51.436974 | orchestrator | changed: [testbed-manager] 2025-06-22 19:21:51.437061 | orchestrator | 2025-06-22 19:21:51.437078 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-22 19:21:51.511700 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:21:51.511751 | orchestrator | 2025-06-22 19:21:51.511757 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-22 19:21:52.369186 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:21:52.369247 | orchestrator | changed: [testbed-manager] 2025-06-22 19:21:52.369256 | orchestrator | 2025-06-22 19:21:52.369264 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-22 19:21:52.408674 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:21:52.408732 | orchestrator | 2025-06-22 19:21:52.408742 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-22 19:21:52.439191 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:21:52.439231 | orchestrator | 2025-06-22 19:21:52.439237 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-22 19:21:52.465836 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:21:52.465881 | orchestrator | 2025-06-22 19:21:52.465889 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-22 19:21:52.508802 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:21:52.508847 | orchestrator | 2025-06-22 19:21:52.508857 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-22 19:21:53.215658 | orchestrator | ok: [testbed-manager] 2025-06-22 19:21:53.215736 | orchestrator | 2025-06-22 19:21:53.215748 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-22 19:21:53.215758 | orchestrator | 2025-06-22 19:21:53.215766 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:21:54.604429 | orchestrator | ok: [testbed-manager] 2025-06-22 19:21:54.604534 | orchestrator | 2025-06-22 19:21:54.604551 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-22 19:21:55.566778 | orchestrator | changed: [testbed-manager] 2025-06-22 19:21:55.566863 | orchestrator | 2025-06-22 19:21:55.566880 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:21:55.566893 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-06-22 
19:21:55.566905 | orchestrator | 2025-06-22 19:21:56.088409 | orchestrator | ok: Runtime: 0:09:11.157055 2025-06-22 19:21:56.107028 | 2025-06-22 19:21:56.107193 | TASK [Point out that the log in on the manager is now possible] 2025-06-22 19:21:56.141196 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-06-22 19:21:56.152176 | 2025-06-22 19:21:56.152314 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-22 19:21:56.190322 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-06-22 19:21:56.200329 | 2025-06-22 19:21:56.200460 | TASK [Run manager part 1 + 2] 2025-06-22 19:21:57.051250 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:21:57.105654 | orchestrator | 2025-06-22 19:21:57.105706 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-22 19:21:57.105714 | orchestrator | 2025-06-22 19:21:57.105728 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:22:00.116868 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:00.116962 | orchestrator | 2025-06-22 19:22:00.117015 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-22 19:22:00.157547 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:22:00.157627 | orchestrator | 2025-06-22 19:22:00.157645 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-22 19:22:00.200540 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:00.200590 | orchestrator | 2025-06-22 19:22:00.200599 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:22:00.237419 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:00.237544 | orchestrator | 2025-06-22 19:22:00.237563 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:22:00.298141 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:00.298225 | orchestrator | 2025-06-22 19:22:00.298242 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:22:00.363174 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:00.363251 | orchestrator | 2025-06-22 19:22:00.363267 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:22:00.420703 | orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-22 19:22:00.420789 | orchestrator | 2025-06-22 19:22:00.420806 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:22:01.163006 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:01.163097 | orchestrator | 2025-06-22 19:22:01.163114 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:22:01.215620 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:22:01.215698 | orchestrator | 2025-06-22 19:22:01.215712 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:22:02.603847 | orchestrator | changed: 
[testbed-manager] 2025-06-22 19:22:02.603949 | orchestrator | 2025-06-22 19:22:02.603967 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:22:03.177725 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:03.177814 | orchestrator | 2025-06-22 19:22:03.177830 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:22:04.336093 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:04.336176 | orchestrator | 2025-06-22 19:22:04.336193 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:22:17.270476 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:17.270573 | orchestrator | 2025-06-22 19:22:17.270591 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-22 19:22:17.939056 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:17.939136 | orchestrator | 2025-06-22 19:22:17.939154 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-22 19:22:17.991815 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:22:17.991889 | orchestrator | 2025-06-22 19:22:17.991904 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-22 19:22:18.949538 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:18.949616 | orchestrator | 2025-06-22 19:22:18.949631 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-22 19:22:19.929077 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:19.929175 | orchestrator | 2025-06-22 19:22:19.929192 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-22 19:22:20.504834 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:20.504920 | orchestrator | 2025-06-22 19:22:20.504936 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-22 19:22:20.546119 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-22 19:22:20.546228 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-22 19:22:20.546246 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-22 19:22:20.546258 | orchestrator | deprecation_warnings=False in ansible.cfg. 
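For orientation: the osism.commons.repository tasks above replace the legacy /etc/apt/sources.list on the Ubuntu 24.04 manager with a deb822-style ubuntu.sources file and then refresh the package cache. A minimal shell sketch of the same effect (mirror URI, suite list, and keyring path are illustrative; the role's actual template, including a separate security stanza, may differ):

    # Remove the legacy list, install a deb822 sources file, then update the cache.
    sudo rm -f /etc/apt/sources.list
    sudo install -d /etc/apt/sources.list.d
    sudo tee /etc/apt/sources.list.d/ubuntu.sources >/dev/null <<'EOF'
    Types: deb
    URIs: http://archive.ubuntu.com/ubuntu/
    Suites: noble noble-updates noble-backports
    Components: main restricted universe multiverse
    Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
    EOF
    sudo apt-get update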
2025-06-22 19:22:27.563690 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:27.563765 | orchestrator | 2025-06-22 19:22:27.563775 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-22 19:22:36.587742 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-22 19:22:36.587788 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-22 19:22:36.587798 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-22 19:22:36.587805 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-22 19:22:36.587816 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-22 19:22:36.587823 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-22 19:22:36.587830 | orchestrator | 2025-06-22 19:22:36.587837 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-22 19:22:37.634130 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:37.634795 | orchestrator | 2025-06-22 19:22:37.634820 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-22 19:22:37.671881 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:22:37.671919 | orchestrator | 2025-06-22 19:22:37.671925 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-22 19:22:40.774223 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:40.774276 | orchestrator | 2025-06-22 19:22:40.774283 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-22 19:22:40.815248 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:22:40.815345 | orchestrator | 2025-06-22 19:22:40.815364 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-22 19:24:13.775587 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:13.775654 | orchestrator | 2025-06-22 19:24:13.775672 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:24:14.911472 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:14.911524 | orchestrator | 2025-06-22 19:24:14.911532 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:24:14.911539 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-22 19:24:14.911544 | orchestrator | 2025-06-22 19:24:15.333954 | orchestrator | ok: Runtime: 0:02:18.491390 2025-06-22 19:24:15.350650 | 2025-06-22 19:24:15.350807 | TASK [Reboot manager] 2025-06-22 19:24:16.909593 | orchestrator | ok: Runtime: 0:00:00.935930 2025-06-22 19:24:16.925948 | 2025-06-22 19:24:16.926093 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-22 19:24:31.094415 | orchestrator | ok 2025-06-22 19:24:31.106110 | 2025-06-22 19:24:31.106263 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-22 19:25:31.155514 | orchestrator | ok 2025-06-22 19:25:31.168394 | 2025-06-22 19:25:31.168539 | TASK [Deploy manager + bootstrap nodes] 2025-06-22 19:25:33.637582 | orchestrator | 2025-06-22 19:25:33.637814 | orchestrator | # DEPLOY MANAGER 2025-06-22 19:25:33.637842 | orchestrator | 2025-06-22 19:25:33.637864 | orchestrator | + set -e 2025-06-22 19:25:33.637885 | orchestrator | + echo 2025-06-22 19:25:33.637899 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-06-22 19:25:33.637917 | orchestrator | + echo 2025-06-22 19:25:33.637968 | orchestrator | + cat /opt/manager-vars.sh 2025-06-22 19:25:33.640945 | orchestrator | export NUMBER_OF_NODES=6 2025-06-22 19:25:33.640978 | orchestrator | 2025-06-22 19:25:33.640990 | orchestrator | export CEPH_VERSION=reef 2025-06-22 19:25:33.641003 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-22 19:25:33.641015 | orchestrator | export MANAGER_VERSION=latest 2025-06-22 19:25:33.641037 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-22 19:25:33.641048 | orchestrator | 2025-06-22 19:25:33.641066 | orchestrator | export ARA=false 2025-06-22 19:25:33.641077 | orchestrator | export DEPLOY_MODE=manager 2025-06-22 19:25:33.641095 | orchestrator | export TEMPEST=false 2025-06-22 19:25:33.641106 | orchestrator | export IS_ZUUL=true 2025-06-22 19:25:33.641117 | orchestrator | 2025-06-22 19:25:33.641135 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 19:25:33.641147 | orchestrator | export EXTERNAL_API=false 2025-06-22 19:25:33.641157 | orchestrator | 2025-06-22 19:25:33.641168 | orchestrator | export IMAGE_USER=ubuntu 2025-06-22 19:25:33.641182 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-22 19:25:33.641193 | orchestrator | 2025-06-22 19:25:33.641204 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-22 19:25:33.641221 | orchestrator | 2025-06-22 19:25:33.641232 | orchestrator | + echo 2025-06-22 19:25:33.641245 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:25:33.641786 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:25:33.641803 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:25:33.641814 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:25:33.641826 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:25:33.641929 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:25:33.641952 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:25:33.641964 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:25:33.641980 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:25:33.641991 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:25:33.642002 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:25:33.642014 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:25:33.642066 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-22 19:25:33.642077 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-22 19:25:33.642088 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:25:33.642109 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:25:33.642121 | orchestrator | ++ export ARA=false 2025-06-22 19:25:33.642132 | orchestrator | ++ ARA=false 2025-06-22 19:25:33.642148 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:25:33.642159 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:25:33.642170 | orchestrator | ++ export TEMPEST=false 2025-06-22 19:25:33.642180 | orchestrator | ++ TEMPEST=false 2025-06-22 19:25:33.642192 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:25:33.642203 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:25:33.642213 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 19:25:33.642224 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 19:25:33.642236 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:25:33.642247 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 19:25:33.642258 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 
19:25:33.642269 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:25:33.642280 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:25:33.642291 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:25:33.642303 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:25:33.642314 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:25:33.642325 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-22 19:25:33.693422 | orchestrator | + docker version 2025-06-22 19:25:33.940965 | orchestrator | Client: Docker Engine - Community 2025-06-22 19:25:33.941163 | orchestrator | Version: 27.5.1 2025-06-22 19:25:33.941184 | orchestrator | API version: 1.47 2025-06-22 19:25:33.941196 | orchestrator | Go version: go1.22.11 2025-06-22 19:25:33.941207 | orchestrator | Git commit: 9f9e405 2025-06-22 19:25:33.941218 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-22 19:25:33.941230 | orchestrator | OS/Arch: linux/amd64 2025-06-22 19:25:33.941240 | orchestrator | Context: default 2025-06-22 19:25:33.941251 | orchestrator | 2025-06-22 19:25:33.941263 | orchestrator | Server: Docker Engine - Community 2025-06-22 19:25:33.941273 | orchestrator | Engine: 2025-06-22 19:25:33.941285 | orchestrator | Version: 27.5.1 2025-06-22 19:25:33.941296 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-22 19:25:33.941339 | orchestrator | Go version: go1.22.11 2025-06-22 19:25:33.941351 | orchestrator | Git commit: 4c9b3b0 2025-06-22 19:25:33.941362 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-22 19:25:33.941373 | orchestrator | OS/Arch: linux/amd64 2025-06-22 19:25:33.941383 | orchestrator | Experimental: false 2025-06-22 19:25:33.941394 | orchestrator | containerd: 2025-06-22 19:25:33.941405 | orchestrator | Version: 1.7.27 2025-06-22 19:25:33.941416 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-22 19:25:33.941427 | orchestrator | runc: 2025-06-22 19:25:33.941438 | orchestrator | Version: 1.2.5 2025-06-22 19:25:33.941483 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-22 19:25:33.941497 | orchestrator | docker-init: 2025-06-22 19:25:33.941537 | orchestrator | Version: 0.19.0 2025-06-22 19:25:33.941549 | orchestrator | GitCommit: de40ad0 2025-06-22 19:25:33.944722 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-22 19:25:33.953132 | orchestrator | + set -e 2025-06-22 19:25:33.953166 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:25:33.953174 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:25:33.953181 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:25:33.953187 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:25:33.953194 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:25:33.953200 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:25:33.953207 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:25:33.953213 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-22 19:25:33.953225 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-22 19:25:33.953232 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:25:33.953238 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:25:33.953245 | orchestrator | ++ export ARA=false 2025-06-22 19:25:33.953251 | orchestrator | ++ ARA=false 2025-06-22 19:25:33.953257 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:25:33.953263 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:25:33.953274 | orchestrator | ++ 
export TEMPEST=false 2025-06-22 19:25:33.953280 | orchestrator | ++ TEMPEST=false 2025-06-22 19:25:33.953287 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:25:33.953293 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:25:33.953299 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 19:25:33.953305 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 19:25:33.953312 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:25:33.953318 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 19:25:33.953324 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 19:25:33.953330 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:25:33.953337 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:25:33.953343 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:25:33.953349 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:25:33.953355 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:25:33.953361 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:25:33.953367 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:25:33.953374 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:25:33.953380 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:25:33.953390 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:25:33.953400 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-22 19:25:33.953407 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-22 19:25:33.953413 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-06-22 19:25:33.958628 | orchestrator | + set -e 2025-06-22 19:25:33.958649 | orchestrator | + VERSION=reef 2025-06-22 19:25:33.959802 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:25:33.963352 | orchestrator | + [[ -n ceph_version: reef ]] 2025-06-22 19:25:33.963371 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:25:33.968663 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-06-22 19:25:33.974625 | orchestrator | + set -e 2025-06-22 19:25:33.974645 | orchestrator | + VERSION=2024.2 2025-06-22 19:25:33.975771 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:25:33.977259 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-06-22 19:25:33.977273 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:25:33.982818 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-22 19:25:33.983762 | orchestrator | ++ semver latest 7.0.0 2025-06-22 19:25:34.044018 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-22 19:25:34.044100 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-22 19:25:34.044110 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-22 19:25:34.044118 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-22 19:25:34.133722 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:25:34.136329 | orchestrator | + source /opt/venv/bin/activate 2025-06-22 19:25:34.138862 | orchestrator | ++ deactivate nondestructive 2025-06-22 19:25:34.138980 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:25:34.138995 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:25:34.139008 | orchestrator | ++ hash -r 2025-06-22 19:25:34.139019 | orchestrator | ++ 
'[' -n '' ']' 2025-06-22 19:25:34.139030 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-22 19:25:34.139040 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-22 19:25:34.139051 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-06-22 19:25:34.139082 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-22 19:25:34.139098 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-22 19:25:34.139109 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-22 19:25:34.139120 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-22 19:25:34.139132 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:25:34.139150 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:25:34.139161 | orchestrator | ++ export PATH 2025-06-22 19:25:34.139172 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:25:34.139193 | orchestrator | ++ '[' -z '' ']' 2025-06-22 19:25:34.139204 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-22 19:25:34.139215 | orchestrator | ++ PS1='(venv) ' 2025-06-22 19:25:34.139225 | orchestrator | ++ export PS1 2025-06-22 19:25:34.139236 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-22 19:25:34.139247 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-22 19:25:34.139257 | orchestrator | ++ hash -r 2025-06-22 19:25:34.139299 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-22 19:25:35.485958 | orchestrator | 2025-06-22 19:25:35.486116 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-06-22 19:25:35.486134 | orchestrator | 2025-06-22 19:25:35.486146 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:25:36.043484 | orchestrator | ok: [testbed-manager] 2025-06-22 19:25:36.043611 | orchestrator | 2025-06-22 19:25:36.043625 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-22 19:25:37.006146 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:37.006244 | orchestrator | 2025-06-22 19:25:37.006259 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-22 19:25:37.006268 | orchestrator | 2025-06-22 19:25:37.006273 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:25:39.463653 | orchestrator | ok: [testbed-manager] 2025-06-22 19:25:39.463768 | orchestrator | 2025-06-22 19:25:39.463784 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-22 19:25:39.517060 | orchestrator | ok: [testbed-manager] 2025-06-22 19:25:39.517158 | orchestrator | 2025-06-22 19:25:39.517176 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-22 19:25:40.003237 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:40.003372 | orchestrator | 2025-06-22 19:25:40.003400 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-22 19:25:40.047447 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:25:40.047561 | orchestrator | 2025-06-22 19:25:40.047572 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-06-22 19:25:40.399939 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:40.400045 | orchestrator | 2025-06-22 19:25:40.400062 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-06-22 19:25:40.462480 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:25:40.462589 | orchestrator | 2025-06-22 19:25:40.462601 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-22 19:25:40.811339 | orchestrator | ok: [testbed-manager] 2025-06-22 19:25:40.811449 | orchestrator | 2025-06-22 19:25:40.811466 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-22 19:25:40.933269 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:25:40.933367 | orchestrator | 2025-06-22 19:25:40.933379 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-22 19:25:40.933389 | orchestrator | 2025-06-22 19:25:40.933399 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:25:42.767367 | orchestrator | ok: [testbed-manager] 2025-06-22 19:25:42.767460 | orchestrator | 2025-06-22 19:25:42.767477 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-22 19:25:42.856987 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-22 19:25:42.857080 | orchestrator | 2025-06-22 19:25:42.857097 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-22 19:25:42.906928 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-22 19:25:42.907007 | orchestrator | 2025-06-22 19:25:42.907021 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-22 19:25:43.894067 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-22 19:25:43.894157 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-06-22 19:25:43.894171 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-22 19:25:43.894183 | orchestrator | 2025-06-22 19:25:43.894195 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-22 19:25:45.536756 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-22 19:25:45.536806 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-22 19:25:45.536813 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-22 19:25:45.536817 | orchestrator | 2025-06-22 19:25:45.536822 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-22 19:25:46.130722 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:25:46.130804 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:46.130818 | orchestrator | 2025-06-22 19:25:46.130830 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-22 19:25:46.722864 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:25:46.722975 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:46.723001 | orchestrator | 2025-06-22 19:25:46.723022 | orchestrator | TASK [osism.services.traefik : Copy dynamic 
configuration] ********************* 2025-06-22 19:25:46.779454 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:25:46.779591 | orchestrator | 2025-06-22 19:25:46.779610 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-06-22 19:25:47.086699 | orchestrator | ok: [testbed-manager] 2025-06-22 19:25:47.086786 | orchestrator | 2025-06-22 19:25:47.086801 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-22 19:25:47.157838 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-22 19:25:47.157897 | orchestrator | 2025-06-22 19:25:47.157905 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-22 19:25:48.118687 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:48.118773 | orchestrator | 2025-06-22 19:25:48.118788 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-22 19:25:48.840848 | orchestrator | changed: [testbed-manager] 2025-06-22 19:25:48.840921 | orchestrator | 2025-06-22 19:25:48.840933 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-22 19:26:00.029095 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:00.029242 | orchestrator | 2025-06-22 19:26:00.029261 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-22 19:26:00.073561 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:26:00.073704 | orchestrator | 2025-06-22 19:26:00.073719 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-22 19:26:00.073731 | orchestrator | 2025-06-22 19:26:00.073742 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:26:01.919886 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:01.919995 | orchestrator | 2025-06-22 19:26:01.920040 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-22 19:26:02.030961 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-22 19:26:02.031049 | orchestrator | 2025-06-22 19:26:02.031063 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-22 19:26:02.088923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:26:02.089044 | orchestrator | 2025-06-22 19:26:02.089060 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-22 19:26:04.540407 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:04.540582 | orchestrator | 2025-06-22 19:26:04.540603 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-22 19:26:04.587922 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:04.588014 | orchestrator | 2025-06-22 19:26:04.588031 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-22 19:26:04.706343 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-22 19:26:04.706426 | orchestrator | 2025-06-22 19:26:04.706440 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-22 19:26:07.487605 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-22 19:26:07.487702 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-06-22 19:26:07.487715 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-22 19:26:07.487727 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-22 19:26:07.487738 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-22 19:26:07.487748 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-22 19:26:07.487759 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-22 19:26:07.487770 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-22 19:26:07.487781 | orchestrator | 2025-06-22 19:26:07.487793 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-06-22 19:26:08.107689 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:08.107798 | orchestrator | 2025-06-22 19:26:08.107815 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-22 19:26:08.735117 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:08.735226 | orchestrator | 2025-06-22 19:26:08.735242 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-22 19:26:08.812958 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-22 19:26:08.813057 | orchestrator | 2025-06-22 19:26:08.813073 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-22 19:26:10.041417 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-22 19:26:10.041567 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-22 19:26:10.041585 | orchestrator | 2025-06-22 19:26:10.041598 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-22 19:26:10.677966 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:10.678124 | orchestrator | 2025-06-22 19:26:10.678142 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-22 19:26:10.743697 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:26:10.743812 | orchestrator | 2025-06-22 19:26:10.743828 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-22 19:26:10.793836 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-22 19:26:10.793927 | orchestrator | 2025-06-22 19:26:10.793942 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-06-22 19:26:12.218717 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:26:12.218830 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:26:12.218845 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:12.218857 | orchestrator | 2025-06-22 19:26:12.218870 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-22 19:26:12.854324 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:12.854425 
| orchestrator | 2025-06-22 19:26:12.854440 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-22 19:26:12.907925 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:26:12.908029 | orchestrator | 2025-06-22 19:26:12.908045 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-22 19:26:13.017939 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-22 19:26:13.018112 | orchestrator | 2025-06-22 19:26:13.018138 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-22 19:26:13.547010 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:13.547114 | orchestrator | 2025-06-22 19:26:13.547129 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-22 19:26:13.946594 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:13.946684 | orchestrator | 2025-06-22 19:26:13.946699 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-22 19:26:15.073971 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-22 19:26:15.074114 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-22 19:26:15.074131 | orchestrator | 2025-06-22 19:26:15.074143 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-22 19:26:15.650587 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:15.650678 | orchestrator | 2025-06-22 19:26:15.650692 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-22 19:26:16.025462 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:16.025590 | orchestrator | 2025-06-22 19:26:16.025607 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-22 19:26:16.372954 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:16.373057 | orchestrator | 2025-06-22 19:26:16.373084 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-22 19:26:16.407955 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:26:16.408030 | orchestrator | 2025-06-22 19:26:16.408044 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-22 19:26:16.468735 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-22 19:26:16.468814 | orchestrator | 2025-06-22 19:26:16.468828 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-22 19:26:16.503500 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:16.503597 | orchestrator | 2025-06-22 19:26:16.503611 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-22 19:26:18.345299 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-22 19:26:18.345394 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-22 19:26:18.345409 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-22 19:26:18.345421 | orchestrator | 2025-06-22 19:26:18.345433 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] 
********************* 2025-06-22 19:26:18.972029 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:18.972112 | orchestrator | 2025-06-22 19:26:18.972127 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-06-22 19:26:19.671387 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:19.671486 | orchestrator | 2025-06-22 19:26:19.671501 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-22 19:26:20.404318 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:20.404391 | orchestrator | 2025-06-22 19:26:20.404398 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-22 19:26:20.470460 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-22 19:26:20.470623 | orchestrator | 2025-06-22 19:26:20.470638 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-22 19:26:20.514000 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:20.514117 | orchestrator | 2025-06-22 19:26:20.514128 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-22 19:26:21.230167 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-22 19:26:21.230270 | orchestrator | 2025-06-22 19:26:21.230284 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-22 19:26:21.306848 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-22 19:26:21.306968 | orchestrator | 2025-06-22 19:26:21.306984 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-22 19:26:22.033263 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:22.033370 | orchestrator | 2025-06-22 19:26:22.033387 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-22 19:26:22.645639 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:22.645752 | orchestrator | 2025-06-22 19:26:22.645777 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-22 19:26:22.687194 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:26:22.687300 | orchestrator | 2025-06-22 19:26:22.687317 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-22 19:26:22.753455 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:22.753643 | orchestrator | 2025-06-22 19:26:22.753662 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-22 19:26:23.607749 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:23.607882 | orchestrator | 2025-06-22 19:26:23.607903 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-22 19:27:27.052162 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:27.052294 | orchestrator | 2025-06-22 19:27:27.052311 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-22 19:27:28.008461 | orchestrator | ok: [testbed-manager] 2025-06-22 19:27:28.008593 | orchestrator | 2025-06-22 19:27:28.008610 | orchestrator | TASK [osism.services.manager : 
Do a manual start of the manager service] ******* 2025-06-22 19:27:28.066359 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:27:28.066465 | orchestrator | 2025-06-22 19:27:28.066487 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-06-22 19:27:30.428025 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:30.428136 | orchestrator | 2025-06-22 19:27:30.428155 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-22 19:27:30.473821 | orchestrator | ok: [testbed-manager] 2025-06-22 19:27:30.473915 | orchestrator | 2025-06-22 19:27:30.473929 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-22 19:27:30.473941 | orchestrator | 2025-06-22 19:27:30.473953 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-22 19:27:30.519776 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:27:30.519866 | orchestrator | 2025-06-22 19:27:30.519880 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-22 19:28:30.570394 | orchestrator | Pausing for 60 seconds 2025-06-22 19:28:30.570533 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:30.570610 | orchestrator | 2025-06-22 19:28:30.570635 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-22 19:28:34.692402 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:34.692539 | orchestrator | 2025-06-22 19:28:34.692620 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-22 19:29:16.301691 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-22 19:29:16.301811 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-22 19:29:16.301827 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:16.301841 | orchestrator | 2025-06-22 19:29:16.301853 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-22 19:29:25.352023 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:25.352156 | orchestrator | 2025-06-22 19:29:25.352174 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-22 19:29:25.438900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-22 19:29:25.439030 | orchestrator | 2025-06-22 19:29:25.439046 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-22 19:29:25.439058 | orchestrator | 2025-06-22 19:29:25.439069 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-22 19:29:25.498882 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:29:25.499049 | orchestrator | 2025-06-22 19:29:25.499065 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:29:25.499078 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-22 19:29:25.499089 | orchestrator | 2025-06-22 19:29:25.595129 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:29:25.595277 | orchestrator | + deactivate 2025-06-22 19:29:25.595304 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-22 19:29:25.595319 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:29:25.595329 | orchestrator | + export PATH 2025-06-22 19:29:25.595340 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-22 19:29:25.595351 | orchestrator | + '[' -n '' ']' 2025-06-22 19:29:25.595362 | orchestrator | + hash -r 2025-06-22 19:29:25.595372 | orchestrator | + '[' -n '' ']' 2025-06-22 19:29:25.595383 | orchestrator | + unset VIRTUAL_ENV 2025-06-22 19:29:25.595393 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-22 19:29:25.595427 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-06-22 19:29:25.595438 | orchestrator | + unset -f deactivate 2025-06-22 19:29:25.595449 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-22 19:29:25.600277 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 19:29:25.600318 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-22 19:29:25.600329 | orchestrator | + local max_attempts=60 2025-06-22 19:29:25.600340 | orchestrator | + local name=ceph-ansible 2025-06-22 19:29:25.600351 | orchestrator | + local attempt_num=1 2025-06-22 19:29:25.600939 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:29:25.639875 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:29:25.639941 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-22 19:29:25.639952 | orchestrator | + local max_attempts=60 2025-06-22 19:29:25.639963 | orchestrator | + local name=kolla-ansible 2025-06-22 19:29:25.639974 | orchestrator | + local attempt_num=1 2025-06-22 19:29:25.640723 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-22 19:29:25.676451 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:29:25.676548 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-22 19:29:25.676604 | orchestrator | + local max_attempts=60 2025-06-22 19:29:25.676619 | orchestrator | + local name=osism-ansible 2025-06-22 19:29:25.676630 | orchestrator | + local attempt_num=1 2025-06-22 19:29:25.677261 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-22 19:29:25.708436 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:29:25.708513 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-22 19:29:25.708526 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-22 19:29:26.393201 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-22 19:29:26.586467 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-22 19:29:26.586553 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.586668 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.586686 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-22 19:29:26.586703 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-22 19:29:26.586747 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.586757 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.586765 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-06-22 19:29:26.586772 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest 
"/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.586780 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-22 19:29:26.586788 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.586796 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-22 19:29:26.586803 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.586811 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.586819 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-22 19:29:26.595541 | orchestrator | ++ semver latest 7.0.0 2025-06-22 19:29:26.653200 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-22 19:29:26.653291 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-22 19:29:26.653306 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-22 19:29:26.656787 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-22 19:29:28.470450 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:29:28.470559 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:29:28.470630 | orchestrator | Registering Redlock._release_script 2025-06-22 19:29:28.637736 | orchestrator | 2025-06-22 19:29:28 | INFO  | Task 1af4ae03-efc1-4578-963c-c7c9a096f28d (resolvconf) was prepared for execution. 2025-06-22 19:29:28.637841 | orchestrator | 2025-06-22 19:29:28 | INFO  | It takes a moment until task 1af4ae03-efc1-4578-963c-c7c9a096f28d (resolvconf) has been started and output is visible here. 
2025-06-22 19:29:41.388167 | orchestrator | 2025-06-22 19:29:41.388290 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-22 19:29:41.388306 | orchestrator | 2025-06-22 19:29:41.388318 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:29:41.388330 | orchestrator | Sunday 22 June 2025 19:29:32 +0000 (0:00:00.109) 0:00:00.109 *********** 2025-06-22 19:29:41.388341 | orchestrator | ok: [testbed-manager] 2025-06-22 19:29:41.388353 | orchestrator | 2025-06-22 19:29:41.388364 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-22 19:29:41.388381 | orchestrator | Sunday 22 June 2025 19:29:35 +0000 (0:00:03.331) 0:00:03.440 *********** 2025-06-22 19:29:41.388392 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:29:41.388425 | orchestrator | 2025-06-22 19:29:41.388437 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-22 19:29:41.388448 | orchestrator | Sunday 22 June 2025 19:29:35 +0000 (0:00:00.068) 0:00:03.508 *********** 2025-06-22 19:29:41.388459 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-22 19:29:41.388470 | orchestrator | 2025-06-22 19:29:41.388481 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-22 19:29:41.388491 | orchestrator | Sunday 22 June 2025 19:29:35 +0000 (0:00:00.084) 0:00:03.593 *********** 2025-06-22 19:29:41.388502 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:29:41.388513 | orchestrator | 2025-06-22 19:29:41.388523 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-22 19:29:41.388534 | orchestrator | Sunday 22 June 2025 19:29:35 +0000 (0:00:00.080) 0:00:03.673 *********** 2025-06-22 19:29:41.388544 | orchestrator | ok: [testbed-manager] 2025-06-22 19:29:41.388650 | orchestrator | 2025-06-22 19:29:41.388664 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-22 19:29:41.388675 | orchestrator | Sunday 22 June 2025 19:29:36 +0000 (0:00:00.993) 0:00:04.667 *********** 2025-06-22 19:29:41.388687 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:29:41.388699 | orchestrator | 2025-06-22 19:29:41.388711 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-22 19:29:41.388722 | orchestrator | Sunday 22 June 2025 19:29:36 +0000 (0:00:00.068) 0:00:04.735 *********** 2025-06-22 19:29:41.388734 | orchestrator | ok: [testbed-manager] 2025-06-22 19:29:41.388746 | orchestrator | 2025-06-22 19:29:41.388758 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-22 19:29:41.388770 | orchestrator | Sunday 22 June 2025 19:29:37 +0000 (0:00:00.487) 0:00:05.222 *********** 2025-06-22 19:29:41.388781 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:29:41.388793 | orchestrator | 2025-06-22 19:29:41.388805 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-22 19:29:41.388818 | orchestrator | Sunday 22 June 2025 19:29:37 +0000 (0:00:00.084) 0:00:05.307 
*********** 2025-06-22 19:29:41.388830 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:41.388842 | orchestrator | 2025-06-22 19:29:41.388854 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-22 19:29:41.388866 | orchestrator | Sunday 22 June 2025 19:29:37 +0000 (0:00:00.501) 0:00:05.808 *********** 2025-06-22 19:29:41.388878 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:41.388890 | orchestrator | 2025-06-22 19:29:41.388902 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-22 19:29:41.388914 | orchestrator | Sunday 22 June 2025 19:29:39 +0000 (0:00:01.054) 0:00:06.863 *********** 2025-06-22 19:29:41.388926 | orchestrator | ok: [testbed-manager] 2025-06-22 19:29:41.388938 | orchestrator | 2025-06-22 19:29:41.388950 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-22 19:29:41.388973 | orchestrator | Sunday 22 June 2025 19:29:39 +0000 (0:00:00.939) 0:00:07.803 *********** 2025-06-22 19:29:41.388985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-22 19:29:41.388998 | orchestrator | 2025-06-22 19:29:41.389010 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-22 19:29:41.389022 | orchestrator | Sunday 22 June 2025 19:29:40 +0000 (0:00:00.076) 0:00:07.879 *********** 2025-06-22 19:29:41.389033 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:41.389044 | orchestrator | 2025-06-22 19:29:41.389055 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:29:41.389066 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:29:41.389087 | orchestrator | 2025-06-22 19:29:41.389098 | orchestrator | 2025-06-22 19:29:41.389109 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:29:41.389119 | orchestrator | Sunday 22 June 2025 19:29:41 +0000 (0:00:01.104) 0:00:08.984 *********** 2025-06-22 19:29:41.389130 | orchestrator | =============================================================================== 2025-06-22 19:29:41.389140 | orchestrator | Gathering Facts --------------------------------------------------------- 3.33s 2025-06-22 19:29:41.389151 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.10s 2025-06-22 19:29:41.389162 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.05s 2025-06-22 19:29:41.389172 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 0.99s 2025-06-22 19:29:41.389183 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s 2025-06-22 19:29:41.389193 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.50s 2025-06-22 19:29:41.389221 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.49s 2025-06-22 19:29:41.389233 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-06-22 19:29:41.389244 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-06-22 19:29:41.389254 
| orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.08s 2025-06-22 19:29:41.389265 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-06-22 19:29:41.389275 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-06-22 19:29:41.389286 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-06-22 19:29:41.630735 | orchestrator | + osism apply sshconfig 2025-06-22 19:29:43.326412 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:29:43.326498 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:29:43.326509 | orchestrator | Registering Redlock._release_script 2025-06-22 19:29:43.382255 | orchestrator | 2025-06-22 19:29:43 | INFO  | Task 582e1ab0-d244-46b1-a004-b43d187ab7eb (sshconfig) was prepared for execution. 2025-06-22 19:29:43.382343 | orchestrator | 2025-06-22 19:29:43 | INFO  | It takes a moment until task 582e1ab0-d244-46b1-a004-b43d187ab7eb (sshconfig) has been started and output is visible here. 2025-06-22 19:29:54.696156 | orchestrator | 2025-06-22 19:29:54.696277 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-22 19:29:54.696293 | orchestrator | 2025-06-22 19:29:54.696305 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-22 19:29:54.696317 | orchestrator | Sunday 22 June 2025 19:29:47 +0000 (0:00:00.174) 0:00:00.174 *********** 2025-06-22 19:29:54.696328 | orchestrator | ok: [testbed-manager] 2025-06-22 19:29:54.696341 | orchestrator | 2025-06-22 19:29:54.696352 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-22 19:29:54.696363 | orchestrator | Sunday 22 June 2025 19:29:47 +0000 (0:00:00.527) 0:00:00.701 *********** 2025-06-22 19:29:54.696373 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:54.696385 | orchestrator | 2025-06-22 19:29:54.696396 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-22 19:29:54.696407 | orchestrator | Sunday 22 June 2025 19:29:48 +0000 (0:00:00.502) 0:00:01.203 *********** 2025-06-22 19:29:54.696439 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:29:54.696452 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:29:54.696463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:29:54.696474 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:29:54.696484 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:29:54.696495 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:29:54.696531 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:29:54.696542 | orchestrator | 2025-06-22 19:29:54.696553 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-22 19:29:54.696564 | orchestrator | Sunday 22 June 2025 19:29:53 +0000 (0:00:05.586) 0:00:06.790 *********** 2025-06-22 19:29:54.696643 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:29:54.696662 | orchestrator | 2025-06-22 19:29:54.696675 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-22 19:29:54.696688 | orchestrator 
| Sunday 22 June 2025 19:29:53 +0000 (0:00:00.071) 0:00:06.862 *********** 2025-06-22 19:29:54.696707 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:54.696725 | orchestrator | 2025-06-22 19:29:54.696744 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:29:54.696763 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:29:54.696782 | orchestrator | 2025-06-22 19:29:54.696800 | orchestrator | 2025-06-22 19:29:54.696819 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:29:54.696839 | orchestrator | Sunday 22 June 2025 19:29:54 +0000 (0:00:00.585) 0:00:07.448 *********** 2025-06-22 19:29:54.696858 | orchestrator | =============================================================================== 2025-06-22 19:29:54.696876 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.59s 2025-06-22 19:29:54.696897 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2025-06-22 19:29:54.696917 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.53s 2025-06-22 19:29:54.696936 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.50s 2025-06-22 19:29:54.696949 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-06-22 19:29:54.930477 | orchestrator | + osism apply known-hosts 2025-06-22 19:29:56.621388 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:29:56.621479 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:29:56.621490 | orchestrator | Registering Redlock._release_script 2025-06-22 19:29:56.675149 | orchestrator | 2025-06-22 19:29:56 | INFO  | Task 526f0411-07f0-49d7-bc1b-745f2a553538 (known-hosts) was prepared for execution. 2025-06-22 19:29:56.675247 | orchestrator | 2025-06-22 19:29:56 | INFO  | It takes a moment until task 526f0411-07f0-49d7-bc1b-745f2a553538 (known-hosts) has been started and output is visible here. 
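The sshconfig run that just finished writes one config fragment per host into ~/.ssh/config.d and then assembles them into a single ~/.ssh/config. A rough shell equivalent of that flow, assuming the operator user's home directory and a minimal per-host fragment (the actual fragment template, user and identity options are not shown in this log):

    mkdir -p ~/.ssh/config.d
    for host in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
        # one fragment per host; the real role templates more options than this
        printf 'Host %s\n' "$host" > ~/.ssh/config.d/"$host"
    done
    cat ~/.ssh/config.d/* > ~/.ssh/config    # "Assemble ssh config"
    chmod 0600 ~/.ssh/config                 # final mode is an assumption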
2025-06-22 19:30:12.551625 | orchestrator | 2025-06-22 19:30:12.551712 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-22 19:30:12.551726 | orchestrator | 2025-06-22 19:30:12.551737 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-22 19:30:12.551749 | orchestrator | Sunday 22 June 2025 19:30:00 +0000 (0:00:00.165) 0:00:00.165 *********** 2025-06-22 19:30:12.551760 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:30:12.551771 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:30:12.551781 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:30:12.551792 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:30:12.551803 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:30:12.551813 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:30:12.551824 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:30:12.551834 | orchestrator | 2025-06-22 19:30:12.551845 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-22 19:30:12.551856 | orchestrator | Sunday 22 June 2025 19:30:06 +0000 (0:00:05.938) 0:00:06.104 *********** 2025-06-22 19:30:12.551868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-22 19:30:12.551902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-22 19:30:12.551923 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-22 19:30:12.551935 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-22 19:30:12.551946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-22 19:30:12.551956 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-22 19:30:12.551967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-22 19:30:12.551977 | orchestrator | 2025-06-22 19:30:12.551988 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:12.551999 | orchestrator | Sunday 22 June 2025 19:30:06 +0000 (0:00:00.167) 0:00:06.271 *********** 2025-06-22 19:30:12.552012 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCt5Lc3A7grHgH3URPglan8WaPLXQPk+ewRGfIEzzZ+4ulTQ5wAuOl0zig6lmep/zsGqLZhefTMfEGgrXV9AnhZnDXXlMjLdOnIQsP+liPA4HtGo4Ie+ppGDhmDQ2UASDStfZO9Pge5mxcbQ1Fs66oMcegzkI5H+gBLaotmRUAVcMxSj6yJBQOdxO8/DHClcBII3OAM3W+LRpJ8QwwSF+OpEVtnw/mmisdKAkhDS9xxVs4OV3KIhghFYfyAm+enb0TPU0aCnoWZS1S4k30fd14jgtZzYqShoh05caT3Axfo+kTo8MbQQaUYCba/mZPe+PR/ElSrij1981jqut4Or/lKXjMo6Ylph24TFv0gwBhmsBsb6Sevkw8WYBWg02e4Htv++iMvNQxva0xkb5c8I6LLmydpC28E86mu8q37MDiZIwSQeaeeVjpw/uSKHR6psGYK3ubHmWuivjoRhzRmPdTd+UeoFhLb94UBYiD3U87lVRDX9K+JWoNQ+QE2oRoD9hU=) 2025-06-22 19:30:12.552027 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL6OKywmJDPb6frdPB/9hSmwxZQQ7RUNGA2Q3oDFHKISw7ecUPwUjc6527IAr9IXRZPb8rxipnPnG6IHrYPPKJc=) 2025-06-22 19:30:12.552039 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG0qUTZOoW2rSatR3t4JIypokf4bCzHl6c3CRAXLamR1) 2025-06-22 19:30:12.552050 | orchestrator | 2025-06-22 19:30:12.552061 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:12.552071 | orchestrator | Sunday 22 June 2025 19:30:07 +0000 (0:00:01.188) 0:00:07.460 *********** 2025-06-22 19:30:12.552082 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDOAt8r2Zh7gDiJOw1a9brSNbafLDwAqMgPgL7RGYb8RIK86Q/ZwDjZoZT02Prf/Zs5iCkCEPn46T2nxtxKYhPg=) 2025-06-22 19:30:12.552092 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO2eG8GDJ3XF48cGsZpObJyOZ+io1RM1ByrZ6w4TKDIZ) 2025-06-22 19:30:12.552128 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDl2cbGw6S0Gqu8v5n/PFiye9HdbmkUtUL3Ob78xDGz/w6l/TEo0c0UDS6osNL4pAVlIgmYIF78D3UcPxJUrgLUwC6jcHQ9DEbabYilk3F54ycfvDAU0bJ7lZNLSqUMCT+IOwKAS2vnG/yfqZVlZksYTf5g3QjTlxHktc4EvJtLqYG3MHOkq2N68AyC5ZOEeSxRR+R4P6eY9aLoptEk4sllqXRRNscpunnDBMyi0fHILPfZlgmvpk0G4GnZZrPgZjU+gCLXBGNqf6jbOVBWYKwDMXa3+bpH42lLwxLPf646pxlHyYc6y7EYy89f7ZOyMP4L0mLYz0XqIaTlVlptuH1D+Ma72DxSaiisn1gDBflTNO4iwEQP3KjmkEpyPAIIAT9EgfQm6+eODA7oFTpdRfLItmwBv2ZGfmJTE2CdMugmU0QVGVk/GCLts7NGugG5BIlB2A8fLxd7AHI37fjW45HZLb7AYam2aGNWL9diJoQiWMX1iS3/4fUbKE/E67VKN3M=) 2025-06-22 19:30:12.552141 | orchestrator | 2025-06-22 19:30:12.552161 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:12.552173 | orchestrator | Sunday 22 June 2025 19:30:08 +0000 (0:00:00.943) 0:00:08.403 *********** 2025-06-22 19:30:12.552186 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDECzsth4KqKXDkeZHT+IODI+CqzuH5yoHiF6QWZEtO9suIDFsGF7hJL5KVz+1oXaPWzoyeWGBuyv5MDJEmVwOZGi6ssmefeKZYka1Q6bOQA3k+v2pnmhA9rpw2N1l0RHfLxlZOgpzLKIrURku8rf39+Sj1o2JjG/NV29RUwz4sye9WqmyK7AvEmi0dtR2Jr6p9ce2ZR1luFM7ujve92aJE/Fw++ScqNoXY7Z/6HPIbT72dCIcyRqAOwVvKecLheNA2D2x1xasWnnJYq3Jble25HimWYa+AhxyC3tHIKAGcfMN4miIJRt9cc+3FKdjx/9Kaxp3hEiRLT/hDxujEsj6js1zLcsrB5Rudd/SMnERvUDOGiOLMG171q420FsC7TezS8cuZImxZPFc0/wQpFRpjrjJiiY+TTEGR8XZbAGIrRSBeJBhtW4iwLihnvrC44+UvPRE+6S2Z7XDNMZfzrM20sW0OmfYyG/Vc1hvGBfbxqrs7qaZ4vELpFCre3Gy4QQE=) 2025-06-22 19:30:12.552258 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICEwOn+7FNcwhjVIpVTQBj40q0aNCM/PiyN3wIeQMBc6) 2025-06-22 19:30:12.552270 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH/VsLNmA3jSaIlr2AKU2YuEOp9UrWSGno7GYc/aVEOnyQnZzDD8u9X3moEHqF+KzR4ZQZ7OMlKhnte4CAP7alQ=) 2025-06-22 19:30:12.552281 | orchestrator | 2025-06-22 19:30:12.552291 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:12.552302 | orchestrator | Sunday 22 June 2025 19:30:09 +0000 (0:00:00.939) 0:00:09.343 *********** 2025-06-22 19:30:12.552313 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHjyNY/+oEZV8SsO7GtL+4H/vnVtSFocFBQ73pula01A) 2025-06-22 19:30:12.552324 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCJZAaXM1AoMOsj8SdIRerTHxbDwmqZ7PJfcd6N5i7s/mDkP8nPzpY+o7k5dCFzZyaLMp0iTfhvmQidZ8D0MAPsaxMyYAqqN8BTdhsJoC9TAPdqVCsdBtyOnIuFoSgTlAABRbLuFR8LAGkWnFLzO99hHEEAW23nQGy6DllmH8mJ+aiLxiWsEcUR8d6IQleUhWxc5JNRxYbQwOF/8k4+icoNXPhQ/WNhSdt+lLDJ7MNQ7w0+a3pm/kdOqrgnb1kse5ra92j6LvjMGxh7ReswuBvvZQfz1TOIWovH7GxV6JAhzU/xwNCNI0V45yfosVWWtAIwx/jHFT4XjsykiF3Jj0Yz+epjTo6i7qpk4ODGTdwfpo0lD6vzhjXvIB/l4f1BstARgs3+xJ5K4g6KXOmmouZmKVRABmXwuhHHC0B2g7/fEiNAFMZ676hNqh+DyUl2K7LMPwP8KsIndJKb3z8GguHj9ct/j++zfCzhH4QejKP4mx02ERkspLE6ewEjk+REgMk=) 2025-06-22 19:30:12.552335 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMaXMVXGwMDB6uOdto4y10/S6owt70k+ukrkKAniA9EO7uhyXSoXDZZedCr+QoPadZZNubRs9XVa5p4185T9gA8=) 2025-06-22 19:30:12.552346 | orchestrator | 2025-06-22 19:30:12.552357 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:12.552367 | orchestrator | Sunday 22 June 2025 19:30:10 +0000 (0:00:00.966) 0:00:10.309 *********** 2025-06-22 19:30:12.552378 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuruwfq+YnpF1pFGUmtiwVFHenxgrp9KBELH2v/D8hEb4hXRBFl7S7leUANkXMVHOcoar1DFgS4bm4KgNVnfVRFavfxExVF/ycJ4Ic66B00hIrPV14MJsj/bwKfWZy8gnNsTz12SWi13rt3K9IdA+QOvqeqCz9lswm96EaR8Ph46PaA8kzGqwc5hfiBfbljKtO9V1bCpoty4qtPCHR8nH1TV+C/y0+dlBrPXXKsCOpDCOD64EceTZophCyBGfKGl1n/qiibz+T+4rl+cJycBPwGbvJqTapR7OLkFu2IDHSM89Gw0GeMxNPo+x1mSCxVdfLQLau0sF89xhnjs6HaG8X7tB8Z0P8ck7oOGwqxnvqrLyPaqmYCTwkpiopq2mhgWbEE5qY8Z39F/bbPU1h7HiVjlUojm2mvoMk7UW7KgJsOcNZ9tILJeAoG+9EISZSDlDWYbQs7Rwj55WE2LxQm1uRP3v6OMmgbUHoM1u85wFo4F6An5X6yGWQkIfYmwI/Hzc=) 2025-06-22 19:30:12.552390 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI8h6IB8SzxgwVl6D1f5/HHQX64gGUkKf9PcrYZh3gxybLXWV9KFOHNCwOJ2k6gDAr6r81LN8YXrSt7fSBumNPQ=) 2025-06-22 19:30:12.552401 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZ84i3w3e/uszrRKjOwwpoenMz95quMIylPinAiQ17v) 2025-06-22 19:30:12.552411 | orchestrator | 2025-06-22 19:30:12.552422 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:12.552433 | orchestrator | Sunday 22 June 2025 19:30:11 +0000 (0:00:00.943) 0:00:11.253 *********** 2025-06-22 19:30:12.552456 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB+OpG/vvRCTQvp7+icGaaezKMmpkWS/o3LAFm5SMCw3D//2FfXQ9/WGzOp3vZndsFJ1PnnglLs8sxRqnZZN4XY=) 2025-06-22 19:30:22.608276 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHURG7X+H+x89B20zxB2a8SBz/zfd6eaitrEgJ+V7zsy) 2025-06-22 19:30:22.608397 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbQNzkK4jya8M3GyXb7I0WGvrDCWaHRW5kU9CxNikdryu9y5M2ixE1nCXOYSSaLi7uFL7ZIXUVS5+0EUqNVGDfG6Eu1cLlDCfg2tR/WKutPIWlO7pWxJmVTf2U8UMMhzwoBgLIA5C8jX1ilbWuQ0rVrJH7PKg/jVBxe4AHtksX/7HvBcOLpucf+zwlg9h8IeRQEt9MnGRzrUFKcuRQQJsSrrI4jkK+6FW6VcEapk1m9MRGbrn+rKrvyKuS71DshR3aZXeMOsAay3Z5BVzu47VZCM/n0tLXyOzJvpCBry12P+J1S2kFyJnEqUT7TgbsEz1GiFmfpOEGXtSwkj+lT0FU7oznnxepoclsbqRJL2L3ssMxfD8sVbgksSOQyZW7mb0QCAliDeB/qqd5ucIjtpp0gvAQB15/enIWaTvhxk658iuRctX2syvqFbk1WjGw4g7MPilfvr3WJ2a9z8TtbFqPhr7XHs0zmckssq+00I1LZqZJOQEfNiAG41pEjLTHJrs=) 2025-06-22 19:30:22.608417 | orchestrator | 2025-06-22 19:30:22.608431 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:22.608443 | orchestrator | Sunday 22 June 2025 19:30:12 +0000 (0:00:00.943) 0:00:12.196 *********** 2025-06-22 19:30:22.608455 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1oz/bpFDN2AksKjuNdsXhpXItMie/x+O98gjXHGFju1Bivhkp846BqKJh3jyk02wxqAdumD1Q+qWieuTPpdbw=) 2025-06-22 19:30:22.608486 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiHI2AlqXxBSnqCJ4QbzppojjONufwfvsfHwWgdKegxVgWNiZ7x0EmEPbJxv/eJEzeCbNMryc28Asw5WwEdXCZ+aQ2AEtf04o41l8ToeNayhpSiqQx3O0rYybOGohVK6mhkkmLjoVcwhCE5GDQ2eVZte93wHZduURKYfLm9UHlp7fyk8NYIhUQ73u5KV0y5WhOivCiuGtJWfWCSd836+wunCagI/Ggm0hKdo13hW26znY/IAUr0tBqYa8w2WHiilYRPiMGOjZT8Ur7lAQHYylEsK46y/CEAgVtNwQkk5kndQf9w/N9BCDMZCzkxHbjfW6/5QGcDJTtm5KnvKGNNZU3xZTdzO8Tk+MSxOlG66KtpeID7Bu4+3O74spBMc1+ex/UHY0djlTsBjevzpjbL2Dn2TTEBq4GmbqZ/UAGxGS03Gf2G/WHAl1fChsLb/NSI8d5LvyrN5vfGCXvnUwzgLMpP9/2YlQwjlHCN1WDVuJUKYmJ3UZmYoA4nzzu04UdY+E=) 2025-06-22 19:30:22.608499 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDW2PfbWUDICIfaz4K+mvazzXoIUnSA3MjMzwKJm7YMR) 2025-06-22 19:30:22.608510 | orchestrator | 2025-06-22 19:30:22.608522 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-22 19:30:22.608533 | orchestrator | Sunday 22 June 2025 19:30:13 +0000 (0:00:00.927) 0:00:13.124 *********** 2025-06-22 19:30:22.608545 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:30:22.608556 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:30:22.608566 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:30:22.608636 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:30:22.608648 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:30:22.608658 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:30:22.608669 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:30:22.608680 | orchestrator | 2025-06-22 19:30:22.608690 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-22 19:30:22.608702 | orchestrator | Sunday 22 June 2025 19:30:18 +0000 (0:00:04.855) 0:00:17.979 *********** 2025-06-22 19:30:22.608714 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => 
(item=Scanned entries of testbed-manager) 2025-06-22 19:30:22.608727 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-22 19:30:22.608738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-22 19:30:22.608772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-22 19:30:22.608784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-22 19:30:22.608795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-22 19:30:22.608806 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-22 19:30:22.608818 | orchestrator | 2025-06-22 19:30:22.608847 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:22.608860 | orchestrator | Sunday 22 June 2025 19:30:18 +0000 (0:00:00.162) 0:00:18.141 *********** 2025-06-22 19:30:22.608872 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIG0qUTZOoW2rSatR3t4JIypokf4bCzHl6c3CRAXLamR1) 2025-06-22 19:30:22.608887 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCt5Lc3A7grHgH3URPglan8WaPLXQPk+ewRGfIEzzZ+4ulTQ5wAuOl0zig6lmep/zsGqLZhefTMfEGgrXV9AnhZnDXXlMjLdOnIQsP+liPA4HtGo4Ie+ppGDhmDQ2UASDStfZO9Pge5mxcbQ1Fs66oMcegzkI5H+gBLaotmRUAVcMxSj6yJBQOdxO8/DHClcBII3OAM3W+LRpJ8QwwSF+OpEVtnw/mmisdKAkhDS9xxVs4OV3KIhghFYfyAm+enb0TPU0aCnoWZS1S4k30fd14jgtZzYqShoh05caT3Axfo+kTo8MbQQaUYCba/mZPe+PR/ElSrij1981jqut4Or/lKXjMo6Ylph24TFv0gwBhmsBsb6Sevkw8WYBWg02e4Htv++iMvNQxva0xkb5c8I6LLmydpC28E86mu8q37MDiZIwSQeaeeVjpw/uSKHR6psGYK3ubHmWuivjoRhzRmPdTd+UeoFhLb94UBYiD3U87lVRDX9K+JWoNQ+QE2oRoD9hU=) 2025-06-22 19:30:22.608901 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBL6OKywmJDPb6frdPB/9hSmwxZQQ7RUNGA2Q3oDFHKISw7ecUPwUjc6527IAr9IXRZPb8rxipnPnG6IHrYPPKJc=) 2025-06-22 19:30:22.608913 | orchestrator | 2025-06-22 19:30:22.608926 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:22.608938 | orchestrator | Sunday 22 June 2025 19:30:19 +0000 (0:00:00.965) 0:00:19.107 *********** 2025-06-22 19:30:22.608950 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIO2eG8GDJ3XF48cGsZpObJyOZ+io1RM1ByrZ6w4TKDIZ) 2025-06-22 19:30:22.608962 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDl2cbGw6S0Gqu8v5n/PFiye9HdbmkUtUL3Ob78xDGz/w6l/TEo0c0UDS6osNL4pAVlIgmYIF78D3UcPxJUrgLUwC6jcHQ9DEbabYilk3F54ycfvDAU0bJ7lZNLSqUMCT+IOwKAS2vnG/yfqZVlZksYTf5g3QjTlxHktc4EvJtLqYG3MHOkq2N68AyC5ZOEeSxRR+R4P6eY9aLoptEk4sllqXRRNscpunnDBMyi0fHILPfZlgmvpk0G4GnZZrPgZjU+gCLXBGNqf6jbOVBWYKwDMXa3+bpH42lLwxLPf646pxlHyYc6y7EYy89f7ZOyMP4L0mLYz0XqIaTlVlptuH1D+Ma72DxSaiisn1gDBflTNO4iwEQP3KjmkEpyPAIIAT9EgfQm6+eODA7oFTpdRfLItmwBv2ZGfmJTE2CdMugmU0QVGVk/GCLts7NGugG5BIlB2A8fLxd7AHI37fjW45HZLb7AYam2aGNWL9diJoQiWMX1iS3/4fUbKE/E67VKN3M=) 2025-06-22 19:30:22.608975 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDOAt8r2Zh7gDiJOw1a9brSNbafLDwAqMgPgL7RGYb8RIK86Q/ZwDjZoZT02Prf/Zs5iCkCEPn46T2nxtxKYhPg=) 2025-06-22 19:30:22.608986 | orchestrator | 2025-06-22 19:30:22.608999 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:22.609011 | orchestrator | Sunday 22 June 2025 19:30:20 +0000 (0:00:01.039) 0:00:20.146 *********** 2025-06-22 19:30:22.609023 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBH/VsLNmA3jSaIlr2AKU2YuEOp9UrWSGno7GYc/aVEOnyQnZzDD8u9X3moEHqF+KzR4ZQZ7OMlKhnte4CAP7alQ=) 2025-06-22 19:30:22.609051 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDECzsth4KqKXDkeZHT+IODI+CqzuH5yoHiF6QWZEtO9suIDFsGF7hJL5KVz+1oXaPWzoyeWGBuyv5MDJEmVwOZGi6ssmefeKZYka1Q6bOQA3k+v2pnmhA9rpw2N1l0RHfLxlZOgpzLKIrURku8rf39+Sj1o2JjG/NV29RUwz4sye9WqmyK7AvEmi0dtR2Jr6p9ce2ZR1luFM7ujve92aJE/Fw++ScqNoXY7Z/6HPIbT72dCIcyRqAOwVvKecLheNA2D2x1xasWnnJYq3Jble25HimWYa+AhxyC3tHIKAGcfMN4miIJRt9cc+3FKdjx/9Kaxp3hEiRLT/hDxujEsj6js1zLcsrB5Rudd/SMnERvUDOGiOLMG171q420FsC7TezS8cuZImxZPFc0/wQpFRpjrjJiiY+TTEGR8XZbAGIrRSBeJBhtW4iwLihnvrC44+UvPRE+6S2Z7XDNMZfzrM20sW0OmfYyG/Vc1hvGBfbxqrs7qaZ4vELpFCre3Gy4QQE=) 2025-06-22 19:30:22.609064 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICEwOn+7FNcwhjVIpVTQBj40q0aNCM/PiyN3wIeQMBc6) 2025-06-22 19:30:22.609076 | orchestrator | 2025-06-22 19:30:22.609088 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:22.609101 | orchestrator | Sunday 22 June 2025 19:30:21 +0000 (0:00:01.044) 0:00:21.191 *********** 2025-06-22 19:30:22.609121 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCJZAaXM1AoMOsj8SdIRerTHxbDwmqZ7PJfcd6N5i7s/mDkP8nPzpY+o7k5dCFzZyaLMp0iTfhvmQidZ8D0MAPsaxMyYAqqN8BTdhsJoC9TAPdqVCsdBtyOnIuFoSgTlAABRbLuFR8LAGkWnFLzO99hHEEAW23nQGy6DllmH8mJ+aiLxiWsEcUR8d6IQleUhWxc5JNRxYbQwOF/8k4+icoNXPhQ/WNhSdt+lLDJ7MNQ7w0+a3pm/kdOqrgnb1kse5ra92j6LvjMGxh7ReswuBvvZQfz1TOIWovH7GxV6JAhzU/xwNCNI0V45yfosVWWtAIwx/jHFT4XjsykiF3Jj0Yz+epjTo6i7qpk4ODGTdwfpo0lD6vzhjXvIB/l4f1BstARgs3+xJ5K4g6KXOmmouZmKVRABmXwuhHHC0B2g7/fEiNAFMZ676hNqh+DyUl2K7LMPwP8KsIndJKb3z8GguHj9ct/j++zfCzhH4QejKP4mx02ERkspLE6ewEjk+REgMk=) 2025-06-22 19:30:26.641444 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMaXMVXGwMDB6uOdto4y10/S6owt70k+ukrkKAniA9EO7uhyXSoXDZZedCr+QoPadZZNubRs9XVa5p4185T9gA8=) 2025-06-22 19:30:26.641528 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHjyNY/+oEZV8SsO7GtL+4H/vnVtSFocFBQ73pula01A) 2025-06-22 
19:30:26.641544 | orchestrator | 2025-06-22 19:30:26.641556 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:26.641568 | orchestrator | Sunday 22 June 2025 19:30:22 +0000 (0:00:01.059) 0:00:22.250 *********** 2025-06-22 19:30:26.641929 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCuruwfq+YnpF1pFGUmtiwVFHenxgrp9KBELH2v/D8hEb4hXRBFl7S7leUANkXMVHOcoar1DFgS4bm4KgNVnfVRFavfxExVF/ycJ4Ic66B00hIrPV14MJsj/bwKfWZy8gnNsTz12SWi13rt3K9IdA+QOvqeqCz9lswm96EaR8Ph46PaA8kzGqwc5hfiBfbljKtO9V1bCpoty4qtPCHR8nH1TV+C/y0+dlBrPXXKsCOpDCOD64EceTZophCyBGfKGl1n/qiibz+T+4rl+cJycBPwGbvJqTapR7OLkFu2IDHSM89Gw0GeMxNPo+x1mSCxVdfLQLau0sF89xhnjs6HaG8X7tB8Z0P8ck7oOGwqxnvqrLyPaqmYCTwkpiopq2mhgWbEE5qY8Z39F/bbPU1h7HiVjlUojm2mvoMk7UW7KgJsOcNZ9tILJeAoG+9EISZSDlDWYbQs7Rwj55WE2LxQm1uRP3v6OMmgbUHoM1u85wFo4F6An5X6yGWQkIfYmwI/Hzc=) 2025-06-22 19:30:26.641948 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI8h6IB8SzxgwVl6D1f5/HHQX64gGUkKf9PcrYZh3gxybLXWV9KFOHNCwOJ2k6gDAr6r81LN8YXrSt7fSBumNPQ=) 2025-06-22 19:30:26.641960 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJZ84i3w3e/uszrRKjOwwpoenMz95quMIylPinAiQ17v) 2025-06-22 19:30:26.641971 | orchestrator | 2025-06-22 19:30:26.641982 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:26.641993 | orchestrator | Sunday 22 June 2025 19:30:23 +0000 (0:00:01.064) 0:00:23.315 *********** 2025-06-22 19:30:26.642005 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHURG7X+H+x89B20zxB2a8SBz/zfd6eaitrEgJ+V7zsy) 2025-06-22 19:30:26.642066 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbQNzkK4jya8M3GyXb7I0WGvrDCWaHRW5kU9CxNikdryu9y5M2ixE1nCXOYSSaLi7uFL7ZIXUVS5+0EUqNVGDfG6Eu1cLlDCfg2tR/WKutPIWlO7pWxJmVTf2U8UMMhzwoBgLIA5C8jX1ilbWuQ0rVrJH7PKg/jVBxe4AHtksX/7HvBcOLpucf+zwlg9h8IeRQEt9MnGRzrUFKcuRQQJsSrrI4jkK+6FW6VcEapk1m9MRGbrn+rKrvyKuS71DshR3aZXeMOsAay3Z5BVzu47VZCM/n0tLXyOzJvpCBry12P+J1S2kFyJnEqUT7TgbsEz1GiFmfpOEGXtSwkj+lT0FU7oznnxepoclsbqRJL2L3ssMxfD8sVbgksSOQyZW7mb0QCAliDeB/qqd5ucIjtpp0gvAQB15/enIWaTvhxk658iuRctX2syvqFbk1WjGw4g7MPilfvr3WJ2a9z8TtbFqPhr7XHs0zmckssq+00I1LZqZJOQEfNiAG41pEjLTHJrs=) 2025-06-22 19:30:26.642104 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBB+OpG/vvRCTQvp7+icGaaezKMmpkWS/o3LAFm5SMCw3D//2FfXQ9/WGzOp3vZndsFJ1PnnglLs8sxRqnZZN4XY=) 2025-06-22 19:30:26.642117 | orchestrator | 2025-06-22 19:30:26.642129 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:30:26.642141 | orchestrator | Sunday 22 June 2025 19:30:24 +0000 (0:00:01.025) 0:00:24.341 *********** 2025-06-22 19:30:26.642153 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCiHI2AlqXxBSnqCJ4QbzppojjONufwfvsfHwWgdKegxVgWNiZ7x0EmEPbJxv/eJEzeCbNMryc28Asw5WwEdXCZ+aQ2AEtf04o41l8ToeNayhpSiqQx3O0rYybOGohVK6mhkkmLjoVcwhCE5GDQ2eVZte93wHZduURKYfLm9UHlp7fyk8NYIhUQ73u5KV0y5WhOivCiuGtJWfWCSd836+wunCagI/Ggm0hKdo13hW26znY/IAUr0tBqYa8w2WHiilYRPiMGOjZT8Ur7lAQHYylEsK46y/CEAgVtNwQkk5kndQf9w/N9BCDMZCzkxHbjfW6/5QGcDJTtm5KnvKGNNZU3xZTdzO8Tk+MSxOlG66KtpeID7Bu4+3O74spBMc1+ex/UHY0djlTsBjevzpjbL2Dn2TTEBq4GmbqZ/UAGxGS03Gf2G/WHAl1fChsLb/NSI8d5LvyrN5vfGCXvnUwzgLMpP9/2YlQwjlHCN1WDVuJUKYmJ3UZmYoA4nzzu04UdY+E=) 2025-06-22 19:30:26.642166 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA1oz/bpFDN2AksKjuNdsXhpXItMie/x+O98gjXHGFju1Bivhkp846BqKJh3jyk02wxqAdumD1Q+qWieuTPpdbw=) 2025-06-22 19:30:26.642178 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDW2PfbWUDICIfaz4K+mvazzXoIUnSA3MjMzwKJm7YMR) 2025-06-22 19:30:26.642190 | orchestrator | 2025-06-22 19:30:26.642202 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-22 19:30:26.642214 | orchestrator | Sunday 22 June 2025 19:30:25 +0000 (0:00:01.040) 0:00:25.381 *********** 2025-06-22 19:30:26.642227 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-22 19:30:26.642239 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-22 19:30:26.642282 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-22 19:30:26.642295 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-22 19:30:26.642307 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 19:30:26.642320 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-22 19:30:26.642332 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-22 19:30:26.642345 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:30:26.642357 | orchestrator | 2025-06-22 19:30:26.642369 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-22 19:30:26.642381 | orchestrator | Sunday 22 June 2025 19:30:25 +0000 (0:00:00.154) 0:00:25.536 *********** 2025-06-22 19:30:26.642393 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:30:26.642403 | orchestrator | 2025-06-22 19:30:26.642427 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-22 19:30:26.642438 | orchestrator | Sunday 22 June 2025 19:30:25 +0000 (0:00:00.061) 0:00:25.597 *********** 2025-06-22 19:30:26.642453 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:30:26.642464 | orchestrator | 2025-06-22 19:30:26.642475 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-22 19:30:26.642486 | orchestrator | Sunday 22 June 2025 19:30:25 +0000 (0:00:00.045) 0:00:25.643 *********** 2025-06-22 19:30:26.642496 | orchestrator | changed: [testbed-manager] 2025-06-22 19:30:26.642507 | orchestrator | 2025-06-22 19:30:26.642518 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:30:26.642536 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:30:26.642547 | orchestrator | 2025-06-22 19:30:26.642558 | orchestrator | 2025-06-22 19:30:26.642569 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-22 19:30:26.642598 | orchestrator | Sunday 22 June 2025 19:30:26 +0000 (0:00:00.475) 0:00:26.118 *********** 2025-06-22 19:30:26.642609 | orchestrator | =============================================================================== 2025-06-22 19:30:26.642619 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.94s 2025-06-22 19:30:26.642630 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 4.86s 2025-06-22 19:30:26.642641 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2025-06-22 19:30:26.642652 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-22 19:30:26.642663 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-22 19:30:26.642673 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-22 19:30:26.642684 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-22 19:30:26.642695 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-22 19:30:26.642705 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-22 19:30:26.642716 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-06-22 19:30:26.642726 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-06-22 19:30:26.642737 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-06-22 19:30:26.642748 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-06-22 19:30:26.642758 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-06-22 19:30:26.642769 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.94s 2025-06-22 19:30:26.642780 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.93s 2025-06-22 19:30:26.642790 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.48s 2025-06-22 19:30:26.642801 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s 2025-06-22 19:30:26.642812 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-06-22 19:30:26.642823 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-06-22 19:30:26.807136 | orchestrator | + osism apply squid 2025-06-22 19:30:28.351717 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:30:28.351802 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:30:28.351815 | orchestrator | Registering Redlock._release_script 2025-06-22 19:30:28.399775 | orchestrator | 2025-06-22 19:30:28 | INFO  | Task 4cb1468f-09ff-4883-917b-3bdb0fe7b7c9 (squid) was prepared for execution. 2025-06-22 19:30:28.399833 | orchestrator | 2025-06-22 19:30:28 | INFO  | It takes a moment until task 4cb1468f-09ff-4883-917b-3bdb0fe7b7c9 (squid) has been started and output is visible here. 
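The known-hosts run above scans every host twice, once by inventory hostname and once by ansible_host address, and writes the rsa/ecdsa/ed25519 keys it finds before fixing the file permissions. A shell approximation of one pass (the target list and key types are taken from the output above; the known_hosts path and the final mode are assumptions):

    for target in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
        ssh-keyscan -t rsa,ecdsa,ed25519 "$target" >> ~/.ssh/known_hosts
    done
    chmod 0644 ~/.ssh/known_hosts   # "Set file permissions" step; exact mode assumed

The second pass in the log repeats the same scan against the ansible_host addresses 192.168.16.5 and 192.168.16.10 through 192.168.16.15.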
2025-06-22 19:32:21.603343 | orchestrator | 2025-06-22 19:32:21.603515 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-22 19:32:21.603550 | orchestrator | 2025-06-22 19:32:21.603620 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-22 19:32:21.603633 | orchestrator | Sunday 22 June 2025 19:30:31 +0000 (0:00:00.147) 0:00:00.147 *********** 2025-06-22 19:32:21.603649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:32:21.603668 | orchestrator | 2025-06-22 19:32:21.603763 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-22 19:32:21.603776 | orchestrator | Sunday 22 June 2025 19:30:31 +0000 (0:00:00.073) 0:00:00.220 *********** 2025-06-22 19:32:21.603787 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:21.603798 | orchestrator | 2025-06-22 19:32:21.603809 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-22 19:32:21.603820 | orchestrator | Sunday 22 June 2025 19:30:33 +0000 (0:00:01.124) 0:00:01.345 *********** 2025-06-22 19:32:21.603846 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-22 19:32:21.603858 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-22 19:32:21.603871 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-22 19:32:21.603883 | orchestrator | 2025-06-22 19:32:21.603894 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-22 19:32:21.603906 | orchestrator | Sunday 22 June 2025 19:30:34 +0000 (0:00:01.144) 0:00:02.489 *********** 2025-06-22 19:32:21.603918 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-22 19:32:21.603930 | orchestrator | 2025-06-22 19:32:21.603942 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-22 19:32:21.603954 | orchestrator | Sunday 22 June 2025 19:30:35 +0000 (0:00:01.028) 0:00:03.518 *********** 2025-06-22 19:32:21.603967 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:21.603983 | orchestrator | 2025-06-22 19:32:21.604002 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-22 19:32:21.604021 | orchestrator | Sunday 22 June 2025 19:30:35 +0000 (0:00:00.376) 0:00:03.894 *********** 2025-06-22 19:32:21.604041 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:21.604061 | orchestrator | 2025-06-22 19:32:21.604074 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-22 19:32:21.604086 | orchestrator | Sunday 22 June 2025 19:30:36 +0000 (0:00:00.869) 0:00:04.764 *********** 2025-06-22 19:32:21.604097 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
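The single FAILED - RETRYING message above is Ansible's bounded retry loop on the "Manage squid service" task: ten retries remain and the task succeeds on a later attempt (about 31 seconds in total), after which the handler chain that follows restarts squid, pauses 60 seconds and waits for a healthy container. A manual smoke test of the freshly deployed proxy could then send one request through it from the manager; the endpoint below (manager address 192.168.16.5 from the compose listing further up, squid's default port 3128) is an assumption and does not appear in this log:

    # hypothetical one-shot check that the proxy forwards requests
    curl --proxy http://192.168.16.5:3128 -sSf -o /dev/null https://registry.osism.tech/ \
        && echo "squid proxy answers"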
2025-06-22 19:32:21.604110 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:21.604121 | orchestrator | 2025-06-22 19:32:21.604133 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-22 19:32:21.604147 | orchestrator | Sunday 22 June 2025 19:31:08 +0000 (0:00:31.530) 0:00:36.295 *********** 2025-06-22 19:32:21.604159 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:21.604171 | orchestrator | 2025-06-22 19:32:21.604182 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-22 19:32:21.604193 | orchestrator | Sunday 22 June 2025 19:31:20 +0000 (0:00:12.548) 0:00:48.844 *********** 2025-06-22 19:32:21.604203 | orchestrator | Pausing for 60 seconds 2025-06-22 19:32:21.604214 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:21.604224 | orchestrator | 2025-06-22 19:32:21.604235 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-22 19:32:21.604245 | orchestrator | Sunday 22 June 2025 19:32:20 +0000 (0:01:00.076) 0:01:48.920 *********** 2025-06-22 19:32:21.604256 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:21.604266 | orchestrator | 2025-06-22 19:32:21.604277 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-22 19:32:21.604287 | orchestrator | Sunday 22 June 2025 19:32:20 +0000 (0:00:00.067) 0:01:48.988 *********** 2025-06-22 19:32:21.604297 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:21.604308 | orchestrator | 2025-06-22 19:32:21.604319 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:32:21.604329 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:32:21.604340 | orchestrator | 2025-06-22 19:32:21.604350 | orchestrator | 2025-06-22 19:32:21.604361 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:32:21.604371 | orchestrator | Sunday 22 June 2025 19:32:21 +0000 (0:00:00.608) 0:01:49.597 *********** 2025-06-22 19:32:21.604392 | orchestrator | =============================================================================== 2025-06-22 19:32:21.604402 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-06-22 19:32:21.604413 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.53s 2025-06-22 19:32:21.604424 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.55s 2025-06-22 19:32:21.604453 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.14s 2025-06-22 19:32:21.604464 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.12s 2025-06-22 19:32:21.604474 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.03s 2025-06-22 19:32:21.604485 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.87s 2025-06-22 19:32:21.604495 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.61s 2025-06-22 19:32:21.604506 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2025-06-22 19:32:21.604516 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 
0.07s 2025-06-22 19:32:21.604527 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-06-22 19:32:21.858994 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-22 19:32:21.859439 | orchestrator | ++ semver latest 9.0.0 2025-06-22 19:32:21.907627 | orchestrator | + [[ -1 -lt 0 ]] 2025-06-22 19:32:21.907717 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-22 19:32:21.907732 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-22 19:32:23.698100 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:32:23.698194 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:32:23.698208 | orchestrator | Registering Redlock._release_script 2025-06-22 19:32:23.755722 | orchestrator | 2025-06-22 19:32:23 | INFO  | Task fb1bbedb-d58b-4fe4-86f7-3619fe2fdb8b (operator) was prepared for execution. 2025-06-22 19:32:23.755817 | orchestrator | 2025-06-22 19:32:23 | INFO  | It takes a moment until task fb1bbedb-d58b-4fe4-86f7-3619fe2fdb8b (operator) has been started and output is visible here. 2025-06-22 19:32:39.259670 | orchestrator | 2025-06-22 19:32:39.259791 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-22 19:32:39.259808 | orchestrator | 2025-06-22 19:32:39.259820 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:32:39.259831 | orchestrator | Sunday 22 June 2025 19:32:27 +0000 (0:00:00.146) 0:00:00.146 *********** 2025-06-22 19:32:39.259843 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:39.259855 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:39.259866 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:39.259877 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:39.259888 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:39.259898 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:39.259909 | orchestrator | 2025-06-22 19:32:39.259920 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-22 19:32:39.259930 | orchestrator | Sunday 22 June 2025 19:32:31 +0000 (0:00:03.419) 0:00:03.565 *********** 2025-06-22 19:32:39.259941 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:39.259952 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:39.259962 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:39.259973 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:39.259983 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:39.260010 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:39.260021 | orchestrator | 2025-06-22 19:32:39.260032 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-22 19:32:39.260042 | orchestrator | 2025-06-22 19:32:39.260053 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-22 19:32:39.260063 | orchestrator | Sunday 22 June 2025 19:32:31 +0000 (0:00:00.735) 0:00:04.301 *********** 2025-06-22 19:32:39.260074 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:39.260085 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:39.260118 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:39.260130 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:39.260141 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:39.260153 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:39.260165 | orchestrator | 2025-06-22 19:32:39.260177 | orchestrator | TASK 
[osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-22 19:32:39.260189 | orchestrator | Sunday 22 June 2025 19:32:31 +0000 (0:00:00.154) 0:00:04.455 *********** 2025-06-22 19:32:39.260201 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:39.260213 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:39.260224 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:39.260236 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:39.260248 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:39.260259 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:39.260272 | orchestrator | 2025-06-22 19:32:39.260285 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-22 19:32:39.260297 | orchestrator | Sunday 22 June 2025 19:32:32 +0000 (0:00:00.141) 0:00:04.597 *********** 2025-06-22 19:32:39.260309 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:39.260322 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:39.260335 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:39.260345 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:39.260356 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:39.260366 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:39.260377 | orchestrator | 2025-06-22 19:32:39.260387 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-22 19:32:39.260398 | orchestrator | Sunday 22 June 2025 19:32:32 +0000 (0:00:00.575) 0:00:05.172 *********** 2025-06-22 19:32:39.260409 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:39.260419 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:39.260430 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:39.260440 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:39.260450 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:39.260460 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:39.260471 | orchestrator | 2025-06-22 19:32:39.260481 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-22 19:32:39.260492 | orchestrator | Sunday 22 June 2025 19:32:33 +0000 (0:00:00.762) 0:00:05.934 *********** 2025-06-22 19:32:39.260502 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-06-22 19:32:39.260514 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-06-22 19:32:39.260524 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-06-22 19:32:39.260535 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-06-22 19:32:39.260545 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-06-22 19:32:39.260573 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-06-22 19:32:39.260584 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-06-22 19:32:39.260595 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-06-22 19:32:39.260605 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-06-22 19:32:39.260616 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-06-22 19:32:39.260626 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-06-22 19:32:39.260637 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-06-22 19:32:39.260647 | orchestrator | 2025-06-22 19:32:39.260658 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-22 19:32:39.260668 | orchestrator | Sunday 22 June 2025 19:32:34 +0000 
(0:00:01.168) 0:00:07.103 *********** 2025-06-22 19:32:39.260679 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:39.260689 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:39.260700 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:39.260710 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:39.260720 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:39.260731 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:39.260741 | orchestrator | 2025-06-22 19:32:39.260752 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-22 19:32:39.260771 | orchestrator | Sunday 22 June 2025 19:32:35 +0000 (0:00:01.358) 0:00:08.462 *********** 2025-06-22 19:32:39.260782 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-06-22 19:32:39.260792 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-06-22 19:32:39.260803 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-06-22 19:32:39.260813 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:32:39.260842 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:32:39.260854 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:32:39.260864 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:32:39.260875 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:32:39.260885 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:32:39.260896 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-06-22 19:32:39.260906 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-06-22 19:32:39.260917 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-06-22 19:32:39.260927 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-06-22 19:32:39.260938 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-06-22 19:32:39.260948 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-06-22 19:32:39.260959 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:32:39.260969 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:32:39.260980 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:32:39.260990 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:32:39.261001 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:32:39.261012 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:32:39.261022 | orchestrator | 2025-06-22 19:32:39.261032 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-06-22 19:32:39.261044 | orchestrator | Sunday 22 June 2025 19:32:37 +0000 (0:00:01.244) 0:00:09.707 *********** 2025-06-22 19:32:39.261054 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:39.261065 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:39.261075 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:39.261086 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
19:32:39.261096 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:39.261107 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:39.261117 | orchestrator | 2025-06-22 19:32:39.261128 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-22 19:32:39.261138 | orchestrator | Sunday 22 June 2025 19:32:37 +0000 (0:00:00.150) 0:00:09.857 *********** 2025-06-22 19:32:39.261149 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:39.261159 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:39.261169 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:39.261180 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:39.261190 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:39.261201 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:39.261211 | orchestrator | 2025-06-22 19:32:39.261231 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-22 19:32:39.261241 | orchestrator | Sunday 22 June 2025 19:32:37 +0000 (0:00:00.550) 0:00:10.407 *********** 2025-06-22 19:32:39.261252 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:39.261263 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:39.261278 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:39.261289 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:39.261307 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:39.261317 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:39.261328 | orchestrator | 2025-06-22 19:32:39.261339 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-22 19:32:39.261350 | orchestrator | Sunday 22 June 2025 19:32:38 +0000 (0:00:00.163) 0:00:10.571 *********** 2025-06-22 19:32:39.261360 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-22 19:32:39.261371 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:39.261381 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 19:32:39.261392 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:39.261402 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 19:32:39.261413 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:39.261423 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-22 19:32:39.261434 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 19:32:39.261445 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:39.261455 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:39.261465 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 19:32:39.261476 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:39.261487 | orchestrator | 2025-06-22 19:32:39.261497 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-22 19:32:39.261508 | orchestrator | Sunday 22 June 2025 19:32:38 +0000 (0:00:00.679) 0:00:11.251 *********** 2025-06-22 19:32:39.261518 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:39.261529 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:39.261539 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:39.261550 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:39.261580 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:39.261591 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:39.261601 | orchestrator | 2025-06-22 19:32:39.261612 | 
orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-22 19:32:39.261623 | orchestrator | Sunday 22 June 2025 19:32:38 +0000 (0:00:00.146) 0:00:11.397 *********** 2025-06-22 19:32:39.261633 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:39.261644 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:39.261654 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:39.261665 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:39.261676 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:39.261686 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:39.261697 | orchestrator | 2025-06-22 19:32:39.261716 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-22 19:32:39.261734 | orchestrator | Sunday 22 June 2025 19:32:39 +0000 (0:00:00.161) 0:00:11.558 *********** 2025-06-22 19:32:39.261753 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:39.261770 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:39.261790 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:39.261808 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:39.261837 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:40.335166 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:40.335289 | orchestrator | 2025-06-22 19:32:40.335306 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-22 19:32:40.335319 | orchestrator | Sunday 22 June 2025 19:32:39 +0000 (0:00:00.165) 0:00:11.724 *********** 2025-06-22 19:32:40.335330 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:40.335340 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:40.335351 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:40.335361 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:40.335372 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:40.335382 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:40.335393 | orchestrator | 2025-06-22 19:32:40.335404 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-22 19:32:40.335414 | orchestrator | Sunday 22 June 2025 19:32:39 +0000 (0:00:00.648) 0:00:12.372 *********** 2025-06-22 19:32:40.335451 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:40.335463 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:40.335473 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:40.335484 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:40.335509 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:40.335520 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:40.335530 | orchestrator | 2025-06-22 19:32:40.335540 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:32:40.335602 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:32:40.335617 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:32:40.335627 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:32:40.335637 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:32:40.335648 | orchestrator | testbed-node-4 : ok=12  changed=8  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:32:40.335658 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:32:40.335668 | orchestrator | 2025-06-22 19:32:40.335679 | orchestrator | 2025-06-22 19:32:40.335690 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:32:40.335700 | orchestrator | Sunday 22 June 2025 19:32:40 +0000 (0:00:00.219) 0:00:12.591 *********** 2025-06-22 19:32:40.335713 | orchestrator | =============================================================================== 2025-06-22 19:32:40.335725 | orchestrator | Gathering Facts --------------------------------------------------------- 3.42s 2025-06-22 19:32:40.335737 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.36s 2025-06-22 19:32:40.335750 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.25s 2025-06-22 19:32:40.335763 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.17s 2025-06-22 19:32:40.335775 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.76s 2025-06-22 19:32:40.335787 | orchestrator | Do not require tty for all users ---------------------------------------- 0.74s 2025-06-22 19:32:40.335798 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s 2025-06-22 19:32:40.335810 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s 2025-06-22 19:32:40.335822 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.58s 2025-06-22 19:32:40.335833 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.55s 2025-06-22 19:32:40.335845 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.22s 2025-06-22 19:32:40.335856 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-06-22 19:32:40.335868 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s 2025-06-22 19:32:40.335879 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-06-22 19:32:40.335891 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.15s 2025-06-22 19:32:40.335902 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.15s 2025-06-22 19:32:40.335914 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2025-06-22 19:32:40.335926 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s 2025-06-22 19:32:40.573764 | orchestrator | + osism apply --environment custom facts 2025-06-22 19:32:42.253491 | orchestrator | 2025-06-22 19:32:42 | INFO  | Trying to run play facts in environment custom 2025-06-22 19:32:42.257768 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:32:42.258111 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:32:42.258137 | orchestrator | Registering Redlock._release_script 2025-06-22 19:32:42.323584 | orchestrator | 2025-06-22 19:32:42 | INFO  | Task 4b65a3eb-fe8f-4ca9-8943-d73e370f838c (facts) was prepared for execution. 
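At this point the operator play has finished and the next osism apply task (facts) is being queued. For orientation, a minimal shell sketch of the apply sequence this nutshell deployment drives, reconstructed only from the commands visible in this log (not taken from the testbed scripts themselves):

    # Illustrative reconstruction of the apply sequence traced in this log.
    # Each "osism apply" call queues the named play as a task and streams its output here.
    osism apply operator -u ubuntu -l testbed-nodes   # create the operator user and group on the testbed nodes
    osism apply --environment custom facts            # distribute the custom network and ceph device facts
    osism apply bootstrap                             # generic bootstrap roles (hostname, hosts, proxy, resolvconf, repository, rsyslog, ...)
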
2025-06-22 19:32:42.323658 | orchestrator | 2025-06-22 19:32:42 | INFO  | It takes a moment until task 4b65a3eb-fe8f-4ca9-8943-d73e370f838c (facts) has been started and output is visible here. 2025-06-22 19:33:23.202787 | orchestrator | 2025-06-22 19:33:23.202904 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-06-22 19:33:23.202921 | orchestrator | 2025-06-22 19:33:23.202932 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:33:23.202943 | orchestrator | Sunday 22 June 2025 19:32:46 +0000 (0:00:00.083) 0:00:00.083 *********** 2025-06-22 19:33:23.202954 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:23.202966 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:23.202977 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:23.202988 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:23.202998 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:23.203009 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:23.203019 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:23.203029 | orchestrator | 2025-06-22 19:33:23.203040 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-06-22 19:33:23.203051 | orchestrator | Sunday 22 June 2025 19:32:47 +0000 (0:00:01.397) 0:00:01.481 *********** 2025-06-22 19:33:23.203061 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:23.203072 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:23.203083 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:23.203093 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:23.203103 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:23.203114 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:23.203124 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:23.203135 | orchestrator | 2025-06-22 19:33:23.203145 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-06-22 19:33:23.203156 | orchestrator | 2025-06-22 19:33:23.203167 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:33:23.203177 | orchestrator | Sunday 22 June 2025 19:32:48 +0000 (0:00:01.246) 0:00:02.728 *********** 2025-06-22 19:33:23.203188 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:23.203198 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:23.203209 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:23.203220 | orchestrator | 2025-06-22 19:33:23.203231 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:33:23.203242 | orchestrator | Sunday 22 June 2025 19:32:48 +0000 (0:00:00.106) 0:00:02.835 *********** 2025-06-22 19:33:23.203252 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:23.203263 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:23.203273 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:23.203284 | orchestrator | 2025-06-22 19:33:23.203295 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:33:23.203308 | orchestrator | Sunday 22 June 2025 19:32:49 +0000 (0:00:00.196) 0:00:03.031 *********** 2025-06-22 19:33:23.203319 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:23.203330 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:23.203363 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:23.203375 | 
orchestrator | 2025-06-22 19:33:23.203388 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:33:23.203400 | orchestrator | Sunday 22 June 2025 19:32:49 +0000 (0:00:00.196) 0:00:03.228 *********** 2025-06-22 19:33:23.203413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:33:23.203451 | orchestrator | 2025-06-22 19:33:23.203464 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:33:23.203476 | orchestrator | Sunday 22 June 2025 19:32:49 +0000 (0:00:00.125) 0:00:03.354 *********** 2025-06-22 19:33:23.203487 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:23.203499 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:23.203511 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:23.203522 | orchestrator | 2025-06-22 19:33:23.203534 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:33:23.203588 | orchestrator | Sunday 22 June 2025 19:32:49 +0000 (0:00:00.439) 0:00:03.793 *********** 2025-06-22 19:33:23.203602 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:33:23.203614 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:33:23.203625 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:33:23.203637 | orchestrator | 2025-06-22 19:33:23.203650 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:33:23.203661 | orchestrator | Sunday 22 June 2025 19:32:49 +0000 (0:00:00.106) 0:00:03.899 *********** 2025-06-22 19:33:23.203673 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:23.203683 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:23.203694 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:23.203704 | orchestrator | 2025-06-22 19:33:23.203715 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:33:23.203725 | orchestrator | Sunday 22 June 2025 19:32:50 +0000 (0:00:01.032) 0:00:04.931 *********** 2025-06-22 19:33:23.203743 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:23.203761 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:23.203772 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:23.203783 | orchestrator | 2025-06-22 19:33:23.203802 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:33:23.203816 | orchestrator | Sunday 22 June 2025 19:32:51 +0000 (0:00:00.481) 0:00:05.413 *********** 2025-06-22 19:33:23.203826 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:23.203836 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:23.203847 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:23.203857 | orchestrator | 2025-06-22 19:33:23.203870 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:33:23.203888 | orchestrator | Sunday 22 June 2025 19:32:52 +0000 (0:00:01.060) 0:00:06.474 *********** 2025-06-22 19:33:23.203900 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:23.203910 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:23.203921 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:23.203932 | orchestrator | 2025-06-22 19:33:23.203951 | orchestrator | TASK [Install required packages (RedHat)] 
************************************** 2025-06-22 19:33:23.203964 | orchestrator | Sunday 22 June 2025 19:33:06 +0000 (0:00:13.792) 0:00:20.267 *********** 2025-06-22 19:33:23.203975 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:33:23.203985 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:33:23.203999 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:33:23.204017 | orchestrator | 2025-06-22 19:33:23.204029 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-06-22 19:33:23.204057 | orchestrator | Sunday 22 June 2025 19:33:06 +0000 (0:00:00.094) 0:00:20.362 *********** 2025-06-22 19:33:23.204068 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:23.204078 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:23.204089 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:23.204099 | orchestrator | 2025-06-22 19:33:23.204109 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:33:23.204120 | orchestrator | Sunday 22 June 2025 19:33:13 +0000 (0:00:07.462) 0:00:27.824 *********** 2025-06-22 19:33:23.204130 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:23.204140 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:23.204160 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:23.204170 | orchestrator | 2025-06-22 19:33:23.204181 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-22 19:33:23.204191 | orchestrator | Sunday 22 June 2025 19:33:14 +0000 (0:00:00.444) 0:00:28.268 *********** 2025-06-22 19:33:23.204202 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-06-22 19:33:23.204245 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-06-22 19:33:23.204257 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-06-22 19:33:23.204267 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-06-22 19:33:23.204278 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-06-22 19:33:23.204289 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-06-22 19:33:23.204299 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-06-22 19:33:23.204309 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-06-22 19:33:23.204320 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-06-22 19:33:23.204331 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:33:23.204341 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:33:23.204352 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:33:23.204362 | orchestrator | 2025-06-22 19:33:23.204373 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:33:23.204383 | orchestrator | Sunday 22 June 2025 19:33:17 +0000 (0:00:03.681) 0:00:31.950 *********** 2025-06-22 19:33:23.204394 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:23.204405 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:23.204415 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:23.204426 | orchestrator | 2025-06-22 19:33:23.204436 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 
19:33:23.204447 | orchestrator | 2025-06-22 19:33:23.204457 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:33:23.204468 | orchestrator | Sunday 22 June 2025 19:33:19 +0000 (0:00:01.222) 0:00:33.172 *********** 2025-06-22 19:33:23.204478 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:23.204489 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:23.204500 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:23.204510 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:23.204520 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:23.204531 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:23.204541 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:23.204580 | orchestrator | 2025-06-22 19:33:23.204598 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:33:23.204617 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:33:23.204636 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:33:23.204655 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:33:23.204674 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:33:23.204688 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:33:23.204699 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:33:23.204710 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:33:23.204728 | orchestrator | 2025-06-22 19:33:23.204743 | orchestrator | 2025-06-22 19:33:23.204761 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:33:23.204772 | orchestrator | Sunday 22 June 2025 19:33:23 +0000 (0:00:04.002) 0:00:37.175 *********** 2025-06-22 19:33:23.204782 | orchestrator | =============================================================================== 2025-06-22 19:33:23.204793 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.79s 2025-06-22 19:33:23.204804 | orchestrator | Install required packages (Debian) -------------------------------------- 7.46s 2025-06-22 19:33:23.204814 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.00s 2025-06-22 19:33:23.204825 | orchestrator | Copy fact files --------------------------------------------------------- 3.68s 2025-06-22 19:33:23.204841 | orchestrator | Create custom facts directory ------------------------------------------- 1.40s 2025-06-22 19:33:23.204857 | orchestrator | Copy fact file ---------------------------------------------------------- 1.25s 2025-06-22 19:33:23.204876 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.22s 2025-06-22 19:33:23.390934 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2025-06-22 19:33:23.391036 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2025-06-22 19:33:23.391048 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s 2025-06-22 19:33:23.391059 | 
orchestrator | Create custom facts directory ------------------------------------------- 0.44s 2025-06-22 19:33:23.391068 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2025-06-22 19:33:23.391078 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2025-06-22 19:33:23.391088 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s 2025-06-22 19:33:23.391098 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s 2025-06-22 19:33:23.391108 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.11s 2025-06-22 19:33:23.391117 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-06-22 19:33:23.391127 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s 2025-06-22 19:33:23.644085 | orchestrator | + osism apply bootstrap 2025-06-22 19:33:25.358382 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:33:25.358468 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:33:25.358477 | orchestrator | Registering Redlock._release_script 2025-06-22 19:33:25.414447 | orchestrator | 2025-06-22 19:33:25 | INFO  | Task d57051c0-d1db-4f92-8d28-a09a922506d2 (bootstrap) was prepared for execution. 2025-06-22 19:33:25.414543 | orchestrator | 2025-06-22 19:33:25 | INFO  | It takes a moment until task d57051c0-d1db-4f92-8d28-a09a922506d2 (bootstrap) has been started and output is visible here. 2025-06-22 19:33:40.756695 | orchestrator | 2025-06-22 19:33:40.756774 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-06-22 19:33:40.756781 | orchestrator | 2025-06-22 19:33:40.756787 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-06-22 19:33:40.756792 | orchestrator | Sunday 22 June 2025 19:33:29 +0000 (0:00:00.121) 0:00:00.121 *********** 2025-06-22 19:33:40.756798 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:40.756804 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:40.756809 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:40.756814 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:40.756819 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:40.756824 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:40.756828 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:40.756833 | orchestrator | 2025-06-22 19:33:40.756838 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:33:40.756843 | orchestrator | 2025-06-22 19:33:40.756847 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:33:40.756865 | orchestrator | Sunday 22 June 2025 19:33:29 +0000 (0:00:00.203) 0:00:00.325 *********** 2025-06-22 19:33:40.756870 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:40.756874 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:40.756879 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:40.756883 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:40.756888 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:40.756893 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:40.756897 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:40.756902 | orchestrator | 2025-06-22 19:33:40.756907 | orchestrator | PLAY [Gather 
facts for all hosts (if using --limit)] *************************** 2025-06-22 19:33:40.756912 | orchestrator | 2025-06-22 19:33:40.756916 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:33:40.756921 | orchestrator | Sunday 22 June 2025 19:33:33 +0000 (0:00:03.627) 0:00:03.953 *********** 2025-06-22 19:33:40.756933 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-22 19:33:40.756938 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-22 19:33:40.756943 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-22 19:33:40.756947 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:33:40.756952 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-22 19:33:40.756968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:33:40.756973 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-22 19:33:40.756977 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:33:40.756982 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 19:33:40.756992 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-22 19:33:40.756998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 19:33:40.757002 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-22 19:33:40.757007 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-22 19:33:40.757012 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 19:33:40.757017 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-22 19:33:40.757021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 19:33:40.757026 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-22 19:33:40.757031 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-22 19:33:40.757036 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:33:40.757040 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:33:40.757045 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-22 19:33:40.757050 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-22 19:33:40.757054 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-22 19:33:40.757059 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-22 19:33:40.757064 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-22 19:33:40.757069 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-22 19:33:40.757073 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-22 19:33:40.757078 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 19:33:40.757083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 19:33:40.757088 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-22 19:33:40.757093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-22 19:33:40.757098 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 19:33:40.757102 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 19:33:40.757107 | 
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-22 19:33:40.757117 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 19:33:40.757121 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 19:33:40.757126 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:33:40.757131 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:33:40.757136 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-22 19:33:40.757140 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 19:33:40.757145 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-22 19:33:40.757150 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-22 19:33:40.757155 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 19:33:40.757159 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-22 19:33:40.757164 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-22 19:33:40.757169 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 19:33:40.757173 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:33:40.757188 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-22 19:33:40.757193 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-22 19:33:40.757198 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-22 19:33:40.757203 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-22 19:33:40.757208 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:33:40.757212 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-22 19:33:40.757217 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-22 19:33:40.757222 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-22 19:33:40.757227 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:33:40.757231 | orchestrator | 2025-06-22 19:33:40.757236 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-22 19:33:40.757241 | orchestrator | 2025-06-22 19:33:40.757246 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-22 19:33:40.757251 | orchestrator | Sunday 22 June 2025 19:33:33 +0000 (0:00:00.396) 0:00:04.350 *********** 2025-06-22 19:33:40.757255 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:40.757260 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:40.757265 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:40.757270 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:40.757274 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:40.757279 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:40.757284 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:40.757288 | orchestrator | 2025-06-22 19:33:40.757293 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-22 19:33:40.757298 | orchestrator | Sunday 22 June 2025 19:33:34 +0000 (0:00:01.189) 0:00:05.539 *********** 2025-06-22 19:33:40.757303 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:40.757308 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:40.757312 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:40.757317 | orchestrator | ok: [testbed-node-4] 2025-06-22 
19:33:40.757322 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:40.757326 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:40.757331 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:40.757336 | orchestrator | 2025-06-22 19:33:40.757341 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-22 19:33:40.757346 | orchestrator | Sunday 22 June 2025 19:33:36 +0000 (0:00:01.344) 0:00:06.884 *********** 2025-06-22 19:33:40.757351 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:33:40.757359 | orchestrator | 2025-06-22 19:33:40.757364 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-22 19:33:40.757372 | orchestrator | Sunday 22 June 2025 19:33:36 +0000 (0:00:00.254) 0:00:07.138 *********** 2025-06-22 19:33:40.757377 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:40.757382 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:40.757386 | orchestrator | changed: [testbed-manager] 2025-06-22 19:33:40.757391 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:40.757396 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:40.757401 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:40.757405 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:40.757410 | orchestrator | 2025-06-22 19:33:40.757415 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-22 19:33:40.757420 | orchestrator | Sunday 22 June 2025 19:33:38 +0000 (0:00:01.981) 0:00:09.119 *********** 2025-06-22 19:33:40.757425 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:33:40.757430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:33:40.757436 | orchestrator | 2025-06-22 19:33:40.757441 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-22 19:33:40.757446 | orchestrator | Sunday 22 June 2025 19:33:38 +0000 (0:00:00.259) 0:00:09.379 *********** 2025-06-22 19:33:40.757451 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:40.757455 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:40.757460 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:40.757465 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:40.757469 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:40.757474 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:40.757479 | orchestrator | 2025-06-22 19:33:40.757484 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-22 19:33:40.757488 | orchestrator | Sunday 22 June 2025 19:33:39 +0000 (0:00:01.069) 0:00:10.449 *********** 2025-06-22 19:33:40.757493 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:33:40.757498 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:40.757503 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:40.757507 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:40.757512 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:40.757517 | orchestrator | changed: [testbed-node-0] 2025-06-22 
19:33:40.757522 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:40.757526 | orchestrator | 2025-06-22 19:33:40.757531 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-22 19:33:40.757536 | orchestrator | Sunday 22 June 2025 19:33:40 +0000 (0:00:00.617) 0:00:11.066 *********** 2025-06-22 19:33:40.757541 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:33:40.757562 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:33:40.757568 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:33:40.757572 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:33:40.757577 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:33:40.757582 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:33:40.757586 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:40.757591 | orchestrator | 2025-06-22 19:33:40.757596 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-22 19:33:40.757601 | orchestrator | Sunday 22 June 2025 19:33:40 +0000 (0:00:00.407) 0:00:11.473 *********** 2025-06-22 19:33:40.757606 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:33:40.757610 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:33:40.757618 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:33:52.821429 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:33:52.821530 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:33:52.821539 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:33:52.821592 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:33:52.821600 | orchestrator | 2025-06-22 19:33:52.821607 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-22 19:33:52.821636 | orchestrator | Sunday 22 June 2025 19:33:40 +0000 (0:00:00.219) 0:00:11.693 *********** 2025-06-22 19:33:52.821644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:33:52.821664 | orchestrator | 2025-06-22 19:33:52.821670 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-22 19:33:52.821677 | orchestrator | Sunday 22 June 2025 19:33:41 +0000 (0:00:00.271) 0:00:11.964 *********** 2025-06-22 19:33:52.821684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:33:52.821689 | orchestrator | 2025-06-22 19:33:52.821696 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-22 19:33:52.821702 | orchestrator | Sunday 22 June 2025 19:33:41 +0000 (0:00:00.298) 0:00:12.263 *********** 2025-06-22 19:33:52.821708 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.821716 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:52.821723 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:52.821728 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.821734 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.821740 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.821746 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:52.821751 | orchestrator 
| 2025-06-22 19:33:52.821757 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-22 19:33:52.821763 | orchestrator | Sunday 22 June 2025 19:33:42 +0000 (0:00:01.349) 0:00:13.612 *********** 2025-06-22 19:33:52.821769 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:33:52.821775 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:33:52.821781 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:33:52.821786 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:33:52.821792 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:33:52.821797 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:33:52.821803 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:33:52.821808 | orchestrator | 2025-06-22 19:33:52.821814 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-22 19:33:52.821820 | orchestrator | Sunday 22 June 2025 19:33:42 +0000 (0:00:00.197) 0:00:13.810 *********** 2025-06-22 19:33:52.821826 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.821831 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.821837 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.821843 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.821849 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:52.821854 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:52.821860 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:52.821865 | orchestrator | 2025-06-22 19:33:52.821870 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-22 19:33:52.821877 | orchestrator | Sunday 22 June 2025 19:33:43 +0000 (0:00:00.547) 0:00:14.358 *********** 2025-06-22 19:33:52.821882 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:33:52.821888 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:33:52.821894 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:33:52.821900 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:33:52.821905 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:33:52.821911 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:33:52.821917 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:33:52.821922 | orchestrator | 2025-06-22 19:33:52.821929 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-22 19:33:52.821936 | orchestrator | Sunday 22 June 2025 19:33:43 +0000 (0:00:00.218) 0:00:14.577 *********** 2025-06-22 19:33:52.821942 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.821948 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:52.821961 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:52.821966 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:52.821972 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:52.821978 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:52.821984 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:52.821990 | orchestrator | 2025-06-22 19:33:52.821996 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-22 19:33:52.822002 | orchestrator | Sunday 22 June 2025 19:33:44 +0000 (0:00:00.535) 0:00:15.112 *********** 2025-06-22 19:33:52.822094 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822107 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:52.822116 | orchestrator | changed: [testbed-node-4] 
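The osism.commons.resolvconf tasks traced here reduce, on each node, to roughly the following manual steps; this is a hedged shell equivalent of what the role reports doing, not the role's actual implementation:

    # Hedged shell equivalent of the resolvconf changes reported in this play:
    # point /etc/resolv.conf at the systemd-resolved stub resolver, then (re)start the service.
    ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
    systemctl enable --now systemd-resolved
    systemctl restart systemd-resolved   # only needed when the copied configuration files changed
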
2025-06-22 19:33:52.822127 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:52.822133 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:52.822139 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:52.822145 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:52.822152 | orchestrator | 2025-06-22 19:33:52.822160 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-22 19:33:52.822166 | orchestrator | Sunday 22 June 2025 19:33:45 +0000 (0:00:01.154) 0:00:16.267 *********** 2025-06-22 19:33:52.822172 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822177 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.822184 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.822189 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:52.822195 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.822201 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:52.822209 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:52.822215 | orchestrator | 2025-06-22 19:33:52.822221 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-22 19:33:52.822227 | orchestrator | Sunday 22 June 2025 19:33:46 +0000 (0:00:01.213) 0:00:17.480 *********** 2025-06-22 19:33:52.822255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:33:52.822263 | orchestrator | 2025-06-22 19:33:52.822268 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-22 19:33:52.822274 | orchestrator | Sunday 22 June 2025 19:33:47 +0000 (0:00:00.388) 0:00:17.868 *********** 2025-06-22 19:33:52.822280 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:33:52.822286 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:52.822292 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:52.822298 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:52.822304 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:52.822309 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:52.822315 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:52.822321 | orchestrator | 2025-06-22 19:33:52.822327 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:33:52.822332 | orchestrator | Sunday 22 June 2025 19:33:48 +0000 (0:00:01.257) 0:00:19.126 *********** 2025-06-22 19:33:52.822338 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822343 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.822349 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.822354 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.822360 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:52.822365 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:52.822370 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:52.822376 | orchestrator | 2025-06-22 19:33:52.822382 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:33:52.822388 | orchestrator | Sunday 22 June 2025 19:33:48 +0000 (0:00:00.265) 0:00:19.391 *********** 2025-06-22 19:33:52.822394 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822399 | orchestrator | ok: [testbed-node-3] 2025-06-22 
19:33:52.822405 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.822419 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.822424 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:52.822430 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:52.822435 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:52.822441 | orchestrator | 2025-06-22 19:33:52.822447 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:33:52.822452 | orchestrator | Sunday 22 June 2025 19:33:48 +0000 (0:00:00.257) 0:00:19.648 *********** 2025-06-22 19:33:52.822459 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822464 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.822470 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.822475 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.822481 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:52.822487 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:52.822492 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:52.822498 | orchestrator | 2025-06-22 19:33:52.822503 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:33:52.822509 | orchestrator | Sunday 22 June 2025 19:33:48 +0000 (0:00:00.197) 0:00:19.845 *********** 2025-06-22 19:33:52.822517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:33:52.822524 | orchestrator | 2025-06-22 19:33:52.822530 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:33:52.822536 | orchestrator | Sunday 22 June 2025 19:33:49 +0000 (0:00:00.276) 0:00:20.122 *********** 2025-06-22 19:33:52.822541 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822575 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.822581 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.822587 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.822592 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:52.822598 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:52.822603 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:52.822608 | orchestrator | 2025-06-22 19:33:52.822614 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:33:52.822620 | orchestrator | Sunday 22 June 2025 19:33:49 +0000 (0:00:00.526) 0:00:20.648 *********** 2025-06-22 19:33:52.822625 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:33:52.822631 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:33:52.822637 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:33:52.822642 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:33:52.822648 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:33:52.822653 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:33:52.822659 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:33:52.822664 | orchestrator | 2025-06-22 19:33:52.822670 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:33:52.822675 | orchestrator | Sunday 22 June 2025 19:33:50 +0000 (0:00:00.210) 0:00:20.858 *********** 2025-06-22 19:33:52.822681 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822687 | orchestrator | ok: 
[testbed-node-4] 2025-06-22 19:33:52.822692 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.822699 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.822705 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:52.822717 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:52.822724 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:52.822729 | orchestrator | 2025-06-22 19:33:52.822735 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:33:52.822740 | orchestrator | Sunday 22 June 2025 19:33:51 +0000 (0:00:01.104) 0:00:21.963 *********** 2025-06-22 19:33:52.822746 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822751 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.822757 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.822762 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.822768 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:52.822780 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:52.822785 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:52.822791 | orchestrator | 2025-06-22 19:33:52.822797 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:33:52.822802 | orchestrator | Sunday 22 June 2025 19:33:51 +0000 (0:00:00.584) 0:00:22.548 *********** 2025-06-22 19:33:52.822807 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:52.822813 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:52.822819 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:52.822824 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:52.822839 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:30.748638 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:30.748750 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:30.748765 | orchestrator | 2025-06-22 19:34:30.748776 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:34:30.748788 | orchestrator | Sunday 22 June 2025 19:33:52 +0000 (0:00:01.117) 0:00:23.665 *********** 2025-06-22 19:34:30.748798 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.748808 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.748818 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.748827 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:30.748837 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:30.748846 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:30.748856 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:30.748865 | orchestrator | 2025-06-22 19:34:30.748875 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-22 19:34:30.748884 | orchestrator | Sunday 22 June 2025 19:34:07 +0000 (0:00:14.409) 0:00:38.075 *********** 2025-06-22 19:34:30.748894 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.748903 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.748912 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.748922 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.748931 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.748941 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.748950 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.748959 | orchestrator | 2025-06-22 19:34:30.748969 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-22 
19:34:30.748979 | orchestrator | Sunday 22 June 2025 19:34:07 +0000 (0:00:00.222) 0:00:38.298 *********** 2025-06-22 19:34:30.748988 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.748998 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.749007 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.749016 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.749025 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.749035 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.749044 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.749054 | orchestrator | 2025-06-22 19:34:30.749064 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-22 19:34:30.749074 | orchestrator | Sunday 22 June 2025 19:34:07 +0000 (0:00:00.238) 0:00:38.536 *********** 2025-06-22 19:34:30.749085 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.749096 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.749106 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.749117 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.749127 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.749138 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.749148 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.749158 | orchestrator | 2025-06-22 19:34:30.749169 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-22 19:34:30.749180 | orchestrator | Sunday 22 June 2025 19:34:07 +0000 (0:00:00.242) 0:00:38.779 *********** 2025-06-22 19:34:30.749192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:34:30.749205 | orchestrator | 2025-06-22 19:34:30.749240 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-22 19:34:30.749251 | orchestrator | Sunday 22 June 2025 19:34:08 +0000 (0:00:00.283) 0:00:39.063 *********** 2025-06-22 19:34:30.749262 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.749272 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.749282 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.749292 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.749303 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.749313 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.749324 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.749335 | orchestrator | 2025-06-22 19:34:30.749346 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-22 19:34:30.749356 | orchestrator | Sunday 22 June 2025 19:34:09 +0000 (0:00:01.764) 0:00:40.827 *********** 2025-06-22 19:34:30.749367 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:30.749377 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:30.749388 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:30.749398 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:30.749409 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:30.749419 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:30.749429 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:30.749440 | orchestrator | 2025-06-22 19:34:30.749451 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] 
************************* 2025-06-22 19:34:30.749460 | orchestrator | Sunday 22 June 2025 19:34:11 +0000 (0:00:01.074) 0:00:41.902 *********** 2025-06-22 19:34:30.749469 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.749478 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.749488 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.749497 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.749506 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.749515 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.749524 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.749533 | orchestrator | 2025-06-22 19:34:30.749611 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-22 19:34:30.749626 | orchestrator | Sunday 22 June 2025 19:34:11 +0000 (0:00:00.896) 0:00:42.798 *********** 2025-06-22 19:34:30.749637 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:34:30.749648 | orchestrator | 2025-06-22 19:34:30.749657 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-22 19:34:30.749668 | orchestrator | Sunday 22 June 2025 19:34:12 +0000 (0:00:00.285) 0:00:43.084 *********** 2025-06-22 19:34:30.749677 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:30.749687 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:30.749696 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:30.749705 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:30.749715 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:30.749724 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:30.749733 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:30.749743 | orchestrator | 2025-06-22 19:34:30.749769 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-22 19:34:30.749780 | orchestrator | Sunday 22 June 2025 19:34:13 +0000 (0:00:01.072) 0:00:44.156 *********** 2025-06-22 19:34:30.749789 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:34:30.749798 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:34:30.749807 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:34:30.749817 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:34:30.749826 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:34:30.749835 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:34:30.749844 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:34:30.749853 | orchestrator | 2025-06-22 19:34:30.749862 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-22 19:34:30.749884 | orchestrator | Sunday 22 June 2025 19:34:13 +0000 (0:00:00.298) 0:00:44.455 *********** 2025-06-22 19:34:30.749893 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:30.749902 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:30.749911 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:30.749920 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:30.749930 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:30.749939 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:30.749948 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:30.749957 | orchestrator | 2025-06-22 
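The rsyslog tasks above install rsyslog, deploy rsyslog.conf and add a rule that forwards all syslog traffic to a local fluentd daemon. A stand-alone sketch of the forwarding part, assuming a fluentd syslog input on 127.0.0.1:5140/udp (the port and the drop-in file name are assumptions; the log does not show them):

- hosts: all
  become: true
  tasks:
    - name: Install rsyslog package
      ansible.builtin.apt:
        name: rsyslog
        state: present

    # Forward every facility/severity to the assumed local fluentd syslog input.
    - name: Forward syslog messages to local fluentd daemon
      ansible.builtin.copy:
        dest: /etc/rsyslog.d/30-fluentd.conf
        mode: "0644"
        content: |
          *.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
      notify: Restart rsyslog

  handlers:
    - name: Restart rsyslog
      ansible.builtin.service:
        name: rsyslog
        state: restarted
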
19:34:30.749967 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-22 19:34:30.749976 | orchestrator | Sunday 22 June 2025 19:34:25 +0000 (0:00:11.878) 0:00:56.333 *********** 2025-06-22 19:34:30.749985 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.749994 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.750004 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.750013 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.750078 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.750088 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.750097 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.750106 | orchestrator | 2025-06-22 19:34:30.750116 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-22 19:34:30.750125 | orchestrator | Sunday 22 June 2025 19:34:26 +0000 (0:00:01.186) 0:00:57.519 *********** 2025-06-22 19:34:30.750169 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.750179 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.750189 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.750198 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.750207 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.750216 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.750226 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.750235 | orchestrator | 2025-06-22 19:34:30.750244 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-06-22 19:34:30.750254 | orchestrator | Sunday 22 June 2025 19:34:27 +0000 (0:00:00.931) 0:00:58.451 *********** 2025-06-22 19:34:30.750263 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.750273 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.750282 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.750291 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.750301 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.750310 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.750319 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.750329 | orchestrator | 2025-06-22 19:34:30.750339 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-22 19:34:30.750349 | orchestrator | Sunday 22 June 2025 19:34:27 +0000 (0:00:00.215) 0:00:58.666 *********** 2025-06-22 19:34:30.750358 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.750367 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.750377 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.750386 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.750395 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.750404 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.750413 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.750422 | orchestrator | 2025-06-22 19:34:30.750448 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-22 19:34:30.750458 | orchestrator | Sunday 22 June 2025 19:34:28 +0000 (0:00:00.234) 0:00:58.900 *********** 2025-06-22 19:34:30.750468 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:34:30.750478 | orchestrator | 2025-06-22 
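The systohc and configfs steps above are small but easy to miss: util-linux-extra provides the hwclock binary on Ubuntu 24.04, the hardware clock is then written from the system time, and the sys-kernel-config mount unit is started so configfs is available. A rough stand-alone equivalent (not the actual role code):

- hosts: all
  become: true
  tasks:
    - name: Install util-linux-extra package
      ansible.builtin.apt:
        name: util-linux-extra
        state: present

    # Write the current system time to the RTC ("Sync hardware clock").
    - name: Sync hardware clock
      ansible.builtin.command:
        cmd: hwclock --systohc

    # Mount configfs via its systemd mount unit.
    - name: Start sys-kernel-config mount
      ansible.builtin.systemd:
        name: sys-kernel-config.mount
        state: started
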
19:34:30.750487 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-06-22 19:34:30.750497 | orchestrator | Sunday 22 June 2025 19:34:28 +0000 (0:00:00.259) 0:00:59.160 *********** 2025-06-22 19:34:30.750514 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.750523 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.750532 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.750542 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.750575 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.750584 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.750593 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.750603 | orchestrator | 2025-06-22 19:34:30.750612 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-22 19:34:30.750626 | orchestrator | Sunday 22 June 2025 19:34:29 +0000 (0:00:01.634) 0:01:00.795 *********** 2025-06-22 19:34:30.750636 | orchestrator | changed: [testbed-manager] 2025-06-22 19:34:30.750645 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:34:30.750655 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:34:30.750664 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:34:30.750673 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:34:30.750682 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:34:30.750692 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:34:30.750701 | orchestrator | 2025-06-22 19:34:30.750710 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-22 19:34:30.750720 | orchestrator | Sunday 22 June 2025 19:34:30 +0000 (0:00:00.570) 0:01:01.365 *********** 2025-06-22 19:34:30.750729 | orchestrator | ok: [testbed-manager] 2025-06-22 19:34:30.750739 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:34:30.750748 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:34:30.750757 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:34:30.750767 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:34:30.750776 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:34:30.750785 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:34:30.750794 | orchestrator | 2025-06-22 19:34:30.750813 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-22 19:36:46.431674 | orchestrator | Sunday 22 June 2025 19:34:30 +0000 (0:00:00.227) 0:01:01.593 *********** 2025-06-22 19:36:46.431818 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:46.431835 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:46.431847 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:46.431857 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:46.431868 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:46.431879 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:46.431890 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:46.431900 | orchestrator | 2025-06-22 19:36:46.431912 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-22 19:36:46.431923 | orchestrator | Sunday 22 June 2025 19:34:31 +0000 (0:00:01.195) 0:01:02.789 *********** 2025-06-22 19:36:46.431934 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:46.431945 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:46.431955 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:46.431966 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:46.431976 | 
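The "Set needrestart mode" task above reconfigures needrestart so that later apt runs restart affected services without prompting. The log does not show how the role does this; one common way is a conf.d drop-in, sketched here with an assumed path and mode value:

- hosts: all
  become: true
  tasks:
    - name: Install needrestart package
      ansible.builtin.apt:
        name: needrestart
        state: present

    # 'a' = restart affected services automatically, 'l' would merely list them.
    # The drop-in path below is an illustrative choice, not taken from the log.
    - name: Set needrestart mode
      ansible.builtin.copy:
        dest: /etc/needrestart/conf.d/zz-noninteractive.conf
        mode: "0644"
        content: |
          $nrconf{restart} = 'a';
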
orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:46.431986 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:46.431997 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:46.432008 | orchestrator | 2025-06-22 19:36:46.432018 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-22 19:36:46.432029 | orchestrator | Sunday 22 June 2025 19:34:33 +0000 (0:00:01.670) 0:01:04.459 *********** 2025-06-22 19:36:46.432040 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:46.432050 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:46.432061 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:46.432071 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:46.432082 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:46.432093 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:46.432103 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:46.432114 | orchestrator | 2025-06-22 19:36:46.432124 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-22 19:36:46.432135 | orchestrator | Sunday 22 June 2025 19:34:35 +0000 (0:00:02.389) 0:01:06.849 *********** 2025-06-22 19:36:46.432171 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:46.432183 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:46.432194 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:46.432204 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:46.432214 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:46.432225 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:46.432236 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:46.432246 | orchestrator | 2025-06-22 19:36:46.432257 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-22 19:36:46.432268 | orchestrator | Sunday 22 June 2025 19:35:13 +0000 (0:00:37.381) 0:01:44.231 *********** 2025-06-22 19:36:46.432278 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:46.432289 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:46.432300 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:46.432310 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:46.432320 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:46.432331 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:46.432347 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:46.432365 | orchestrator | 2025-06-22 19:36:46.432384 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-22 19:36:46.432403 | orchestrator | Sunday 22 June 2025 19:36:30 +0000 (0:01:16.890) 0:03:01.121 *********** 2025-06-22 19:36:46.432420 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:46.432437 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:46.432459 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:46.432483 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:46.432502 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:46.432519 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:46.432537 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:46.432600 | orchestrator | 2025-06-22 19:36:46.432612 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-22 19:36:46.432624 | orchestrator | Sunday 22 June 2025 19:36:32 +0000 (0:00:01.836) 0:03:02.957 *********** 2025-06-22 19:36:46.432635 | orchestrator | ok: [testbed-node-3] 2025-06-22 
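The package maintenance tasks above account for a large share of this play's runtime, roughly 37 s to download the required packages and 77 s to install them. They map onto standard apt operations; a compact sketch, with a placeholder package list because the real required_packages variable is not printed in the log:

- hosts: all
  become: true
  tasks:
    # Pre-fetch the archives so the actual upgrade is mostly local I/O.
    - name: Download upgrade packages
      ansible.builtin.command:
        cmd: apt-get dist-upgrade --assume-yes --download-only

    - name: Upgrade packages
      ansible.builtin.apt:
        upgrade: dist
        cache_valid_time: 3600   # assumed value for apt_cache_valid_time

    # Placeholder list; the testbed uses its own required_packages variable.
    - name: Install required packages
      ansible.builtin.apt:
        name:
          - curl
          - git
        state: present

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true
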
19:36:46.432645 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:46.432656 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:46.432666 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:46.432677 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:46.432687 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:46.432698 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:46.432708 | orchestrator | 2025-06-22 19:36:46.432719 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-22 19:36:46.432730 | orchestrator | Sunday 22 June 2025 19:36:44 +0000 (0:00:12.060) 0:03:15.017 *********** 2025-06-22 19:36:46.432757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-22 19:36:46.432784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-22 19:36:46.432824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-22 19:36:46.432856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-22 19:36:46.432868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-06-22 19:36:46.432879 | orchestrator | 2025-06-22 19:36:46.432890 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-22 19:36:46.432900 | orchestrator | Sunday 22 June 2025 19:36:44 +0000 (0:00:00.396) 0:03:15.414 *********** 2025-06-22 19:36:46.432911 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:36:46.432922 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:46.432933 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 
19:36:46.432944 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:36:46.432954 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:46.432965 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:46.432975 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:36:46.432985 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:46.432996 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:36:46.433007 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:36:46.433018 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:36:46.433028 | orchestrator | 2025-06-22 19:36:46.433039 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-22 19:36:46.433049 | orchestrator | Sunday 22 June 2025 19:36:46 +0000 (0:00:01.717) 0:03:17.131 *********** 2025-06-22 19:36:46.433060 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:36:46.433072 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:36:46.433082 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:36:46.433093 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:36:46.433103 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:36:46.433114 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:36:46.433124 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:36:46.433135 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:36:46.433145 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:36:46.433155 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:36:46.433166 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:36:46.433176 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:36:46.433193 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:36:46.433204 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:36:46.433214 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:36:46.433225 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:36:46.433235 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:36:46.433246 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:36:46.433256 | orchestrator | 
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:36:46.433267 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:36:46.433287 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:36:54.770981 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:36:54.771092 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:36:54.771106 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:36:54.771118 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:36:54.771129 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:36:54.771140 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:36:54.771151 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:36:54.771162 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:36:54.771173 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:36:54.771184 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:36:54.771195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:36:54.771206 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:36:54.771216 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:36:54.771227 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:36:54.771238 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:54.771250 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:36:54.771261 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:36:54.771272 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:36:54.771283 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:36:54.771293 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:36:54.771304 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:54.771315 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:54.771325 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:54.771336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 19:36:54.771346 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 19:36:54.771382 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 
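The sysctl role walks through the parameter groups listed in the include above (elasticsearch, rabbitmq, generic, compute, k3s_node) and apparently applies each group only on hosts in the matching inventory group, which is why the manager and the compute nodes show "skipping" for the rabbitmq/elasticsearch values while testbed-node-0/1/2 show "changed". The values below come straight from the log; ansible.posix.sysctl is one way to apply such settings persistently, sketched independently of the role:

- hosts: testbed-node-0:testbed-node-1:testbed-node-2
  become: true
  tasks:
    # A few of the values shown above; the full rabbitmq group has ten entries.
    - name: Set sysctl parameters
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        sysctl_set: true
        reload: true
      loop:
        - { name: vm.max_map_count, value: 262144 }
        - { name: net.ipv4.tcp_keepalive_time, value: 6 }
        - { name: net.ipv4.tcp_keepalive_intvl, value: 3 }
        - { name: net.ipv4.tcp_keepalive_probes, value: 3 }
        - { name: net.core.somaxconn, value: 4096 }
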
2025-06-22 19:36:54.771393 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:36:54.771404 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:36:54.771414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:36:54.771425 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:36:54.771435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:36:54.771445 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:36:54.771456 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:36:54.771485 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:36:54.771496 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:36:54.771512 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:36:54.771525 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:36:54.771537 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:36:54.771575 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:36:54.771589 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:36:54.771600 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:36:54.771612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:36:54.771624 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:36:54.771636 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:36:54.771664 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:36:54.771676 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:36:54.771688 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:36:54.771700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:36:54.771712 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:36:54.771723 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:36:54.771735 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:36:54.771747 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:36:54.771759 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:36:54.771771 | orchestrator 
| 2025-06-22 19:36:54.771783 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-22 19:36:54.771795 | orchestrator | Sunday 22 June 2025 19:36:51 +0000 (0:00:05.709) 0:03:22.841 *********** 2025-06-22 19:36:54.771807 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:36:54.771819 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:36:54.771838 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:36:54.771850 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:36:54.771860 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:36:54.771871 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:36:54.771882 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:36:54.771892 | orchestrator | 2025-06-22 19:36:54.771903 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-22 19:36:54.771913 | orchestrator | Sunday 22 June 2025 19:36:53 +0000 (0:00:01.433) 0:03:24.274 *********** 2025-06-22 19:36:54.771923 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:36:54.771934 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:54.771945 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:36:54.771955 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:36:54.771966 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:54.771976 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:54.771987 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:36:54.771997 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:54.772007 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:36:54.772018 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:36:54.772029 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:36:54.772039 | orchestrator | 2025-06-22 19:36:54.772050 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-22 19:36:54.772061 | orchestrator | Sunday 22 June 2025 19:36:53 +0000 (0:00:00.496) 0:03:24.770 *********** 2025-06-22 19:36:54.772071 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:36:54.772082 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:54.772092 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:36:54.772103 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:54.772113 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:36:54.772124 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 19:36:54.772140 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:36:54.772151 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:54.772161 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:36:54.772172 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:36:54.772182 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:36:54.772193 | orchestrator | 2025-06-22 19:36:54.772203 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-22 19:36:54.772214 | orchestrator | Sunday 22 June 2025 19:36:54 +0000 (0:00:00.558) 0:03:25.329 *********** 2025-06-22 19:36:54.772225 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:54.772235 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:54.772246 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:54.772256 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:54.772273 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:06.486710 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:06.486815 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:06.486828 | orchestrator | 2025-06-22 19:37:06.486841 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-22 19:37:06.486853 | orchestrator | Sunday 22 June 2025 19:36:54 +0000 (0:00:00.291) 0:03:25.621 *********** 2025-06-22 19:37:06.486864 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:06.486876 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:06.486886 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:06.486897 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:06.486907 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:06.486918 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:06.486928 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:06.486944 | orchestrator | 2025-06-22 19:37:06.486955 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-22 19:37:06.486966 | orchestrator | Sunday 22 June 2025 19:37:00 +0000 (0:00:05.747) 0:03:31.368 *********** 2025-06-22 19:37:06.486977 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-22 19:37:06.486988 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:06.486998 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-22 19:37:06.487009 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-22 19:37:06.487020 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:06.487030 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:06.487041 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-06-22 19:37:06.487051 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-06-22 19:37:06.487062 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:06.487091 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:06.487103 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-22 19:37:06.487113 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:06.487124 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-06-22 19:37:06.487138 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 19:37:06.487157 | orchestrator | 2025-06-22 19:37:06.487184 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-06-22 19:37:06.487207 | orchestrator | Sunday 22 June 2025 19:37:00 +0000 (0:00:00.296) 0:03:31.665 *********** 2025-06-22 19:37:06.487228 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-22 19:37:06.487247 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-22 19:37:06.487266 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-22 19:37:06.487286 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-22 19:37:06.487305 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-22 19:37:06.487327 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-22 19:37:06.487349 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-22 19:37:06.487370 | orchestrator | 2025-06-22 19:37:06.487384 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-22 19:37:06.487397 | orchestrator | Sunday 22 June 2025 19:37:01 +0000 (0:00:01.000) 0:03:32.666 *********** 2025-06-22 19:37:06.487413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:37:06.487426 | orchestrator | 2025-06-22 19:37:06.487438 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-06-22 19:37:06.487448 | orchestrator | Sunday 22 June 2025 19:37:02 +0000 (0:00:00.505) 0:03:33.172 *********** 2025-06-22 19:37:06.487459 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:06.487470 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:06.487481 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:06.487491 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:06.487502 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:06.487512 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:06.487523 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:06.487599 | orchestrator | 2025-06-22 19:37:06.487613 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-22 19:37:06.487623 | orchestrator | Sunday 22 June 2025 19:37:03 +0000 (0:00:01.176) 0:03:34.348 *********** 2025-06-22 19:37:06.487634 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:06.487645 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:06.487655 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:06.487666 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:06.487676 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:06.487687 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:06.487697 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:06.487708 | orchestrator | 2025-06-22 19:37:06.487718 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-22 19:37:06.487729 | orchestrator | Sunday 22 June 2025 19:37:04 +0000 (0:00:00.603) 0:03:34.952 *********** 2025-06-22 19:37:06.487740 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:06.487750 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:06.487761 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:06.487771 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:06.487797 | orchestrator | changed: [testbed-node-0] 
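The services role a little further up first gathers service facts and then makes sure the required services are running and enabled; in this run only cron is in that list, and the nscd check is skipped on every host (presumably because the service is not installed). A minimal sketch of the same pattern:

- hosts: all
  become: true
  tasks:
    # Exposes ansible_facts.services so later tasks can check whether a
    # service such as nscd exists before trying to manage it.
    - name: Populate service facts
      ansible.builtin.service_facts:

    - name: Start/enable required services
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop:
        - cron
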
2025-06-22 19:37:06.487808 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:06.487818 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:06.487828 | orchestrator | 2025-06-22 19:37:06.487839 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-22 19:37:06.487850 | orchestrator | Sunday 22 June 2025 19:37:04 +0000 (0:00:00.626) 0:03:35.578 *********** 2025-06-22 19:37:06.487861 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:06.487871 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:06.487882 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:06.487892 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:06.487903 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:06.487913 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:06.487924 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:06.487934 | orchestrator | 2025-06-22 19:37:06.487945 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-22 19:37:06.487963 | orchestrator | Sunday 22 June 2025 19:37:05 +0000 (0:00:00.669) 0:03:36.248 *********** 2025-06-22 19:37:06.488033 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619567.3071034, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:06.488059 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619656.8528996, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:06.488078 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619628.3070328, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:06.488158 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619627.1602495, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 
19:37:06.488173 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619632.9188552, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:06.488184 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619622.0664046, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:06.488195 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619627.580365, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:06.488217 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619590.2493799, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:31.185309 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619522.3091486, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:31.185473 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619553.7671132, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 
19:37:31.185528 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619518.1703887, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:31.185628 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619528.02022, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:31.185657 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619520.788023, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:31.185686 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619527.3736281, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:37:31.185707 | orchestrator | 2025-06-22 19:37:31.185730 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-22 19:37:31.185752 | orchestrator | Sunday 22 June 2025 19:37:06 +0000 (0:00:01.077) 0:03:37.325 *********** 2025-06-22 19:37:31.185772 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:31.185792 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:31.185811 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:31.185833 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:31.185855 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:31.185875 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:31.185897 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:31.185918 | orchestrator | 2025-06-22 19:37:31.185940 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-22 19:37:31.185961 | orchestrator | Sunday 22 June 2025 19:37:07 +0000 (0:00:01.098) 0:03:38.424 *********** 2025-06-22 19:37:31.185983 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:31.186005 | orchestrator | changed: 
[testbed-node-4] 2025-06-22 19:37:31.186104 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:31.186127 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:31.186175 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:31.186199 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:31.186221 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:31.186243 | orchestrator | 2025-06-22 19:37:31.186265 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-22 19:37:31.186287 | orchestrator | Sunday 22 June 2025 19:37:08 +0000 (0:00:01.196) 0:03:39.621 *********** 2025-06-22 19:37:31.186309 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:31.186347 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:31.186367 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:31.186387 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:31.186407 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:31.186428 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:31.186447 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:31.186467 | orchestrator | 2025-06-22 19:37:31.186487 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-06-22 19:37:31.186508 | orchestrator | Sunday 22 June 2025 19:37:09 +0000 (0:00:01.097) 0:03:40.718 *********** 2025-06-22 19:37:31.186527 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:31.186547 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:31.186609 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:31.186629 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:31.186648 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:31.186667 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:31.186687 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:31.186706 | orchestrator | 2025-06-22 19:37:31.186725 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-22 19:37:31.186743 | orchestrator | Sunday 22 June 2025 19:37:10 +0000 (0:00:00.277) 0:03:40.996 *********** 2025-06-22 19:37:31.186760 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:31.186778 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:31.186796 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:31.186814 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:31.186832 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:31.186848 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:31.186858 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:31.186869 | orchestrator | 2025-06-22 19:37:31.186879 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-22 19:37:31.186890 | orchestrator | Sunday 22 June 2025 19:37:10 +0000 (0:00:00.758) 0:03:41.754 *********** 2025-06-22 19:37:31.186902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:37:31.186915 | orchestrator | 2025-06-22 19:37:31.186926 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-22 19:37:31.186937 | orchestrator | Sunday 22 June 2025 19:37:11 +0000 (0:00:00.380) 0:03:42.135 *********** 2025-06-22 19:37:31.186947 | 
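The motd role above removes Ubuntu's dynamic message-of-the-day machinery (the update-motd package, the motd-news service, and the pam_motd.so rules in /etc/pam.d/sshd and /etc/pam.d/login), installs static motd/issue/issue.net files and configures sshd not to print the motd. A stand-alone sketch of the core steps, assuming lineinfile-based edits (the actual role may implement them differently):

- hosts: all
  become: true
  tasks:
    # Turn off the dynamic motd-news download.
    - name: Disable the dynamic motd-news service
      ansible.builtin.lineinfile:
        path: /etc/default/motd-news
        regexp: '^ENABLED='
        line: ENABLED=0

    # Remove every pam_motd.so rule so PAM no longer renders the dynamic motd.
    - name: Remove pam_motd.so rule
      ansible.builtin.lineinfile:
        path: "{{ item }}"
        regexp: 'pam_motd\.so'
        state: absent
      loop:
        - /etc/pam.d/sshd
        - /etc/pam.d/login

    # Matches the "Configure SSH to not print the motd" task above.
    - name: Configure SSH to not print the motd
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PrintMotd'
        line: PrintMotd no
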
orchestrator | ok: [testbed-manager] 2025-06-22 19:37:31.186958 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:31.186968 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:31.186979 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:31.186989 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:31.187000 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:31.187010 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:31.187021 | orchestrator | 2025-06-22 19:37:31.187031 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-22 19:37:31.187042 | orchestrator | Sunday 22 June 2025 19:37:19 +0000 (0:00:08.123) 0:03:50.259 *********** 2025-06-22 19:37:31.187052 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:31.187063 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:31.187073 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:31.187084 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:31.187094 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:31.187104 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:31.187114 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:31.187125 | orchestrator | 2025-06-22 19:37:31.187135 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-22 19:37:31.187146 | orchestrator | Sunday 22 June 2025 19:37:20 +0000 (0:00:01.158) 0:03:51.418 *********** 2025-06-22 19:37:31.187156 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:31.187166 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:31.187190 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:31.187200 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:31.187211 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:31.187221 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:31.187232 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:31.187242 | orchestrator | 2025-06-22 19:37:31.187253 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-22 19:37:31.187272 | orchestrator | Sunday 22 June 2025 19:37:21 +0000 (0:00:00.987) 0:03:52.405 *********** 2025-06-22 19:37:31.187284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:37:31.187295 | orchestrator | 2025-06-22 19:37:31.187306 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-22 19:37:31.187316 | orchestrator | Sunday 22 June 2025 19:37:22 +0000 (0:00:00.511) 0:03:52.917 *********** 2025-06-22 19:37:31.187327 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:31.187337 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:31.187347 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:31.187358 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:31.187368 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:31.187379 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:31.187389 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:31.187399 | orchestrator | 2025-06-22 19:37:31.187410 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-22 19:37:31.187420 | orchestrator | Sunday 22 June 2025 19:37:30 +0000 (0:00:08.504) 
0:04:01.421 *********** 2025-06-22 19:37:31.187431 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:31.187441 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:31.187452 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:31.187462 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:37.764642 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:37.764775 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:37.764804 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:37.764829 | orchestrator | 2025-06-22 19:38:37.764853 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-22 19:38:37.764876 | orchestrator | Sunday 22 June 2025 19:37:31 +0000 (0:00:00.605) 0:04:02.027 *********** 2025-06-22 19:38:37.764897 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:37.764921 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:37.764943 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:37.764965 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:37.764988 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:37.765011 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:37.765035 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:37.765059 | orchestrator | 2025-06-22 19:38:37.765083 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-22 19:38:37.765108 | orchestrator | Sunday 22 June 2025 19:37:32 +0000 (0:00:01.078) 0:04:03.105 *********** 2025-06-22 19:38:37.765132 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:37.765156 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:37.765178 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:37.765199 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:37.765221 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:37.765243 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:37.765265 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:37.765287 | orchestrator | 2025-06-22 19:38:37.765310 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-06-22 19:38:37.765332 | orchestrator | Sunday 22 June 2025 19:37:33 +0000 (0:00:01.075) 0:04:04.181 *********** 2025-06-22 19:38:37.765353 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:37.765402 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:37.765454 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:37.765504 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:37.765524 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:37.765543 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:37.765591 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:37.765603 | orchestrator | 2025-06-22 19:38:37.765614 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-22 19:38:37.765626 | orchestrator | Sunday 22 June 2025 19:37:33 +0000 (0:00:00.306) 0:04:04.487 *********** 2025-06-22 19:38:37.765636 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:37.765647 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:37.765657 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:37.765669 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:37.765679 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:37.765689 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:37.765700 | orchestrator | ok: 
[testbed-node-2] 2025-06-22 19:38:37.765711 | orchestrator | 2025-06-22 19:38:37.765721 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-22 19:38:37.765732 | orchestrator | Sunday 22 June 2025 19:37:33 +0000 (0:00:00.348) 0:04:04.836 *********** 2025-06-22 19:38:37.765743 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:37.765754 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:37.765764 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:37.765775 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:37.765785 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:37.765796 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:37.765806 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:37.765816 | orchestrator | 2025-06-22 19:38:37.765827 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-22 19:38:37.765838 | orchestrator | Sunday 22 June 2025 19:37:34 +0000 (0:00:00.294) 0:04:05.131 *********** 2025-06-22 19:38:37.765848 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:37.765859 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:37.765869 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:37.765880 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:37.765890 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:37.765900 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:37.765911 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:37.765921 | orchestrator | 2025-06-22 19:38:37.765932 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-22 19:38:37.765943 | orchestrator | Sunday 22 June 2025 19:37:40 +0000 (0:00:05.740) 0:04:10.872 *********** 2025-06-22 19:38:37.765955 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:37.765968 | orchestrator | 2025-06-22 19:38:37.765979 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-22 19:38:37.766004 | orchestrator | Sunday 22 June 2025 19:37:40 +0000 (0:00:00.393) 0:04:11.266 *********** 2025-06-22 19:38:37.766015 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-22 19:38:37.766080 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-22 19:38:37.766091 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-06-22 19:38:37.766101 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-06-22 19:38:37.766112 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:37.766123 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:37.766133 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-06-22 19:38:37.766144 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-06-22 19:38:37.766154 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-06-22 19:38:37.766165 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-06-22 19:38:37.766175 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:37.766186 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-06-22 19:38:37.766207 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-06-22 
19:38:37.766217 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:37.766228 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:37.766238 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-06-22 19:38:37.766249 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-06-22 19:38:37.766281 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:37.766293 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-06-22 19:38:37.766303 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-06-22 19:38:37.766314 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:37.766325 | orchestrator | 2025-06-22 19:38:37.766335 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-06-22 19:38:37.766346 | orchestrator | Sunday 22 June 2025 19:37:40 +0000 (0:00:00.363) 0:04:11.630 *********** 2025-06-22 19:38:37.766357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:37.766368 | orchestrator | 2025-06-22 19:38:37.766379 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-06-22 19:38:37.766389 | orchestrator | Sunday 22 June 2025 19:37:41 +0000 (0:00:00.405) 0:04:12.036 *********** 2025-06-22 19:38:37.766400 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-06-22 19:38:37.766410 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-06-22 19:38:37.766421 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:37.766432 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-06-22 19:38:37.766442 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:37.766453 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-06-22 19:38:37.766463 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:37.766473 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-06-22 19:38:37.766484 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:37.766495 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:37.766505 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-06-22 19:38:37.766516 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:37.766526 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-06-22 19:38:37.766537 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:37.766547 | orchestrator | 2025-06-22 19:38:37.766595 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-06-22 19:38:37.766616 | orchestrator | Sunday 22 June 2025 19:37:41 +0000 (0:00:00.323) 0:04:12.359 *********** 2025-06-22 19:38:37.766634 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:37.766646 | orchestrator | 2025-06-22 19:38:37.766656 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-06-22 19:38:37.766667 | orchestrator | Sunday 22 June 2025 19:37:42 +0000 
(0:00:00.512) 0:04:12.872 *********** 2025-06-22 19:38:37.766677 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:37.766688 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:37.766698 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:37.766709 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:37.766720 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:37.766730 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:37.766741 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:37.766752 | orchestrator | 2025-06-22 19:38:37.766762 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-06-22 19:38:37.766781 | orchestrator | Sunday 22 June 2025 19:38:15 +0000 (0:00:33.085) 0:04:45.958 *********** 2025-06-22 19:38:37.766792 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:37.766803 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:37.766813 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:37.766823 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:37.766834 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:37.766844 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:37.766855 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:37.766865 | orchestrator | 2025-06-22 19:38:37.766876 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-06-22 19:38:37.766887 | orchestrator | Sunday 22 June 2025 19:38:23 +0000 (0:00:07.960) 0:04:53.918 *********** 2025-06-22 19:38:37.766897 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:37.766908 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:37.766918 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:37.766929 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:37.766940 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:37.766950 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:37.766961 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:37.766971 | orchestrator | 2025-06-22 19:38:37.766982 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-06-22 19:38:37.766993 | orchestrator | Sunday 22 June 2025 19:38:30 +0000 (0:00:07.505) 0:05:01.424 *********** 2025-06-22 19:38:37.767003 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:37.767014 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:37.767024 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:37.767035 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:37.767046 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:37.767056 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:37.767066 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:37.767077 | orchestrator | 2025-06-22 19:38:37.767088 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-06-22 19:38:37.767098 | orchestrator | Sunday 22 June 2025 19:38:32 +0000 (0:00:01.611) 0:05:03.036 *********** 2025-06-22 19:38:37.767109 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:37.767120 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:37.767130 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:37.767141 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:37.767151 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:37.767162 | orchestrator | changed: [testbed-node-1] 2025-06-22 
19:38:37.767172 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:37.767183 | orchestrator | 2025-06-22 19:38:37.767207 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-06-22 19:38:37.767228 | orchestrator | Sunday 22 June 2025 19:38:37 +0000 (0:00:05.569) 0:05:08.605 *********** 2025-06-22 19:38:48.546076 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:48.546168 | orchestrator | 2025-06-22 19:38:48.546186 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-06-22 19:38:48.546199 | orchestrator | Sunday 22 June 2025 19:38:38 +0000 (0:00:00.350) 0:05:08.955 *********** 2025-06-22 19:38:48.546210 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:48.546222 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:48.546233 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:48.546244 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:48.546255 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:48.546265 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:48.546276 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:48.546286 | orchestrator | 2025-06-22 19:38:48.546297 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-06-22 19:38:48.546308 | orchestrator | Sunday 22 June 2025 19:38:38 +0000 (0:00:00.739) 0:05:09.694 *********** 2025-06-22 19:38:48.546343 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:48.546356 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:48.546367 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:48.546377 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:48.546388 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:48.546398 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:48.546423 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:48.546434 | orchestrator | 2025-06-22 19:38:48.546445 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-06-22 19:38:48.546456 | orchestrator | Sunday 22 June 2025 19:38:40 +0000 (0:00:01.605) 0:05:11.300 *********** 2025-06-22 19:38:48.546467 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:48.546477 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:48.546488 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:48.546498 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:48.546508 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:48.546519 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:48.546529 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:48.546539 | orchestrator | 2025-06-22 19:38:48.546550 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-06-22 19:38:48.546597 | orchestrator | Sunday 22 June 2025 19:38:41 +0000 (0:00:00.743) 0:05:12.043 *********** 2025-06-22 19:38:48.546609 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:48.546621 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:48.546633 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:48.546645 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:48.546657 | orchestrator | skipping: [testbed-node-0] 
2025-06-22 19:38:48.546669 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:48.546680 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:48.546692 | orchestrator | 2025-06-22 19:38:48.546704 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-06-22 19:38:48.546716 | orchestrator | Sunday 22 June 2025 19:38:41 +0000 (0:00:00.281) 0:05:12.325 *********** 2025-06-22 19:38:48.546728 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:48.546740 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:48.546752 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:48.546764 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:48.546775 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:48.546787 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:48.546798 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:48.546810 | orchestrator | 2025-06-22 19:38:48.546823 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-06-22 19:38:48.546835 | orchestrator | Sunday 22 June 2025 19:38:41 +0000 (0:00:00.422) 0:05:12.747 *********** 2025-06-22 19:38:48.546847 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:48.546858 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:48.546870 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:48.546882 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:48.546894 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:48.546906 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:48.546917 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:48.546930 | orchestrator | 2025-06-22 19:38:48.546941 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-06-22 19:38:48.546952 | orchestrator | Sunday 22 June 2025 19:38:42 +0000 (0:00:00.323) 0:05:13.071 *********** 2025-06-22 19:38:48.546963 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:48.546973 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:48.546984 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:48.546999 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:48.547010 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:48.547020 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:48.547031 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:48.547041 | orchestrator | 2025-06-22 19:38:48.547052 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-22 19:38:48.547071 | orchestrator | Sunday 22 June 2025 19:38:42 +0000 (0:00:00.294) 0:05:13.365 *********** 2025-06-22 19:38:48.547082 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:48.547092 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:48.547103 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:48.547114 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:48.547124 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:48.547135 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:48.547145 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:48.547156 | orchestrator | 2025-06-22 19:38:48.547167 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-22 19:38:48.547178 | orchestrator | Sunday 22 June 2025 19:38:42 +0000 (0:00:00.321) 0:05:13.686 *********** 2025-06-22 19:38:48.547189 | orchestrator | ok: [testbed-manager] =>  
2025-06-22 19:38:48.547199 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:38:48.547210 | orchestrator | ok: [testbed-node-3] =>  2025-06-22 19:38:48.547220 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:38:48.547231 | orchestrator | ok: [testbed-node-4] =>  2025-06-22 19:38:48.547242 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:38:48.547252 | orchestrator | ok: [testbed-node-5] =>  2025-06-22 19:38:48.547263 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:38:48.547274 | orchestrator | ok: [testbed-node-0] =>  2025-06-22 19:38:48.547284 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:38:48.547312 | orchestrator | ok: [testbed-node-1] =>  2025-06-22 19:38:48.547323 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:38:48.547334 | orchestrator | ok: [testbed-node-2] =>  2025-06-22 19:38:48.547344 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:38:48.547355 | orchestrator | 2025-06-22 19:38:48.547366 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-22 19:38:48.547376 | orchestrator | Sunday 22 June 2025 19:38:43 +0000 (0:00:00.277) 0:05:13.964 *********** 2025-06-22 19:38:48.547387 | orchestrator | ok: [testbed-manager] =>  2025-06-22 19:38:48.547398 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:38:48.547408 | orchestrator | ok: [testbed-node-3] =>  2025-06-22 19:38:48.547419 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:38:48.547429 | orchestrator | ok: [testbed-node-4] =>  2025-06-22 19:38:48.547440 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:38:48.547450 | orchestrator | ok: [testbed-node-5] =>  2025-06-22 19:38:48.547461 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:38:48.547471 | orchestrator | ok: [testbed-node-0] =>  2025-06-22 19:38:48.547482 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:38:48.547492 | orchestrator | ok: [testbed-node-1] =>  2025-06-22 19:38:48.547503 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:38:48.547513 | orchestrator | ok: [testbed-node-2] =>  2025-06-22 19:38:48.547524 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:38:48.547535 | orchestrator | 2025-06-22 19:38:48.547545 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-22 19:38:48.547609 | orchestrator | Sunday 22 June 2025 19:38:43 +0000 (0:00:00.384) 0:05:14.349 *********** 2025-06-22 19:38:48.547622 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:48.547633 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:48.547645 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:48.547664 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:48.547682 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:48.547700 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:48.547718 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:48.547735 | orchestrator | 2025-06-22 19:38:48.547752 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-22 19:38:48.547770 | orchestrator | Sunday 22 June 2025 19:38:43 +0000 (0:00:00.250) 0:05:14.599 *********** 2025-06-22 19:38:48.547788 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:48.547807 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:48.547839 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:48.547857 | orchestrator | skipping: [testbed-node-5] 
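For reference, a version string like the docker_version value printed above (5:27.5.1) is the kind of version that can be held in place with an apt preferences pin. The task below is a minimal illustrative sketch of such a pin; the path /etc/apt/preferences.d/docker-ce and the exact pin form are assumptions for this example and are not taken from the osism.services.docker role.

    - name: Pin the docker-ce package version (illustrative sketch, assumed pin file path)
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce   # assumed location for this example
        mode: "0644"
        content: |
          Package: docker-ce
          Pin: version 5:27.5.1*
          Pin-Priority: 1000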
2025-06-22 19:38:48.547868 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:48.547879 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:48.547889 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:48.547900 | orchestrator | 2025-06-22 19:38:48.547910 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-22 19:38:48.547921 | orchestrator | Sunday 22 June 2025 19:38:44 +0000 (0:00:00.259) 0:05:14.858 *********** 2025-06-22 19:38:48.547934 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:48.547947 | orchestrator | 2025-06-22 19:38:48.547958 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-22 19:38:48.547968 | orchestrator | Sunday 22 June 2025 19:38:44 +0000 (0:00:00.402) 0:05:15.260 *********** 2025-06-22 19:38:48.547979 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:48.547990 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:48.548000 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:48.548011 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:48.548022 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:48.548037 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:48.548056 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:48.548075 | orchestrator | 2025-06-22 19:38:48.548087 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-22 19:38:48.548097 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.818) 0:05:16.078 *********** 2025-06-22 19:38:48.548108 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:48.548118 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:48.548129 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:48.548139 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:48.548150 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:48.548160 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:48.548171 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:48.548181 | orchestrator | 2025-06-22 19:38:48.548192 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-22 19:38:48.548210 | orchestrator | Sunday 22 June 2025 19:38:47 +0000 (0:00:02.743) 0:05:18.822 *********** 2025-06-22 19:38:48.548222 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-22 19:38:48.548233 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-22 19:38:48.548243 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-22 19:38:48.548254 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-22 19:38:48.548264 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-22 19:38:48.548275 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-22 19:38:48.548285 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:48.548295 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-22 19:38:48.548306 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-22 19:38:48.548316 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:48.548327 | orchestrator | skipping: [testbed-node-4] => 
(item=docker-engine)  2025-06-22 19:38:48.548337 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-22 19:38:48.548347 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-22 19:38:48.548358 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-22 19:38:48.548369 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:48.548379 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-22 19:38:48.548390 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-22 19:38:48.548410 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-22 19:39:47.209329 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:47.209439 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-22 19:39:47.209446 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-22 19:39:47.209450 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-22 19:39:47.209454 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:47.209458 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:47.209462 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-22 19:39:47.209466 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-22 19:39:47.209470 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-22 19:39:47.209473 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:47.209477 | orchestrator | 2025-06-22 19:39:47.209482 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-22 19:39:47.209487 | orchestrator | Sunday 22 June 2025 19:38:48 +0000 (0:00:00.774) 0:05:19.596 *********** 2025-06-22 19:39:47.209491 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.209495 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209499 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209502 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209506 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209510 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209513 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209517 | orchestrator | 2025-06-22 19:39:47.209521 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-22 19:39:47.209525 | orchestrator | Sunday 22 June 2025 19:38:54 +0000 (0:00:06.253) 0:05:25.850 *********** 2025-06-22 19:39:47.209528 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.209532 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209536 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209540 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209544 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209547 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209551 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209554 | orchestrator | 2025-06-22 19:39:47.209583 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-22 19:39:47.209587 | orchestrator | Sunday 22 June 2025 19:38:55 +0000 (0:00:00.997) 0:05:26.847 *********** 2025-06-22 19:39:47.209591 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.209595 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209599 | orchestrator | changed: [testbed-node-3] 2025-06-22 
19:39:47.209602 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209606 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209609 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209613 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209617 | orchestrator | 2025-06-22 19:39:47.209620 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-22 19:39:47.209624 | orchestrator | Sunday 22 June 2025 19:39:03 +0000 (0:00:07.349) 0:05:34.197 *********** 2025-06-22 19:39:47.209628 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:47.209632 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209636 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209639 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209643 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209646 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209650 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209654 | orchestrator | 2025-06-22 19:39:47.209658 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-22 19:39:47.209661 | orchestrator | Sunday 22 June 2025 19:39:06 +0000 (0:00:03.176) 0:05:37.373 *********** 2025-06-22 19:39:47.209665 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.209669 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209672 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209676 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209690 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209695 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209698 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209702 | orchestrator | 2025-06-22 19:39:47.209706 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-22 19:39:47.209710 | orchestrator | Sunday 22 June 2025 19:39:08 +0000 (0:00:01.616) 0:05:38.990 *********** 2025-06-22 19:39:47.209713 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.209717 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209721 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209724 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209728 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209732 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209751 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209755 | orchestrator | 2025-06-22 19:39:47.209759 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-22 19:39:47.209763 | orchestrator | Sunday 22 June 2025 19:39:09 +0000 (0:00:01.295) 0:05:40.285 *********** 2025-06-22 19:39:47.209767 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:47.209770 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:47.209774 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:47.209778 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:47.209781 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:47.209785 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:47.209789 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:47.209792 | orchestrator | 2025-06-22 19:39:47.209796 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-06-22 
19:39:47.209800 | orchestrator | Sunday 22 June 2025 19:39:10 +0000 (0:00:00.585) 0:05:40.870 *********** 2025-06-22 19:39:47.209804 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.209807 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209811 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209815 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209818 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209822 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209825 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209829 | orchestrator | 2025-06-22 19:39:47.209833 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-22 19:39:47.209837 | orchestrator | Sunday 22 June 2025 19:39:19 +0000 (0:00:09.666) 0:05:50.537 *********** 2025-06-22 19:39:47.209840 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:47.209854 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209859 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209863 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209867 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209879 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209883 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209893 | orchestrator | 2025-06-22 19:39:47.209897 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-22 19:39:47.209901 | orchestrator | Sunday 22 June 2025 19:39:20 +0000 (0:00:00.919) 0:05:51.456 *********** 2025-06-22 19:39:47.209905 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.209909 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209914 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209918 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209922 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209926 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209930 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209934 | orchestrator | 2025-06-22 19:39:47.209938 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-22 19:39:47.209942 | orchestrator | Sunday 22 June 2025 19:39:29 +0000 (0:00:08.818) 0:06:00.274 *********** 2025-06-22 19:39:47.209947 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.209951 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.209958 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.209962 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.209967 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.209971 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.209975 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.209978 | orchestrator | 2025-06-22 19:39:47.209983 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-22 19:39:47.209987 | orchestrator | Sunday 22 June 2025 19:39:40 +0000 (0:00:11.381) 0:06:11.655 *********** 2025-06-22 19:39:47.209991 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-22 19:39:47.209995 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-22 19:39:47.209999 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-06-22 19:39:47.210003 | orchestrator | ok: [testbed-node-5] => 
(item=python3-docker) 2025-06-22 19:39:47.210007 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-22 19:39:47.210012 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-22 19:39:47.210050 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-06-22 19:39:47.210055 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-22 19:39:47.210059 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-22 19:39:47.210063 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-22 19:39:47.210067 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-22 19:39:47.210071 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-22 19:39:47.210075 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-22 19:39:47.210079 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-22 19:39:47.210083 | orchestrator | 2025-06-22 19:39:47.210087 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-22 19:39:47.210092 | orchestrator | Sunday 22 June 2025 19:39:41 +0000 (0:00:01.190) 0:06:12.846 *********** 2025-06-22 19:39:47.210096 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:47.210100 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:47.210104 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:47.210108 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:47.210112 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:47.210116 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:47.210120 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:47.210124 | orchestrator | 2025-06-22 19:39:47.210128 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-22 19:39:47.210133 | orchestrator | Sunday 22 June 2025 19:39:42 +0000 (0:00:00.509) 0:06:13.355 *********** 2025-06-22 19:39:47.210137 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:47.210141 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.210145 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.210149 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:47.210154 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.210158 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.210162 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.210166 | orchestrator | 2025-06-22 19:39:47.210170 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-22 19:39:47.210179 | orchestrator | Sunday 22 June 2025 19:39:46 +0000 (0:00:03.841) 0:06:17.197 *********** 2025-06-22 19:39:47.210184 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:47.210188 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:47.210192 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:47.210196 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:47.210200 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:47.210204 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:47.210209 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:47.210213 | orchestrator | 2025-06-22 19:39:47.210217 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-06-22 19:39:47.210224 | orchestrator | Sunday 22 June 2025 19:39:46 
+0000 (0:00:00.499) 0:06:17.696 *********** 2025-06-22 19:39:47.210228 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-22 19:39:47.210232 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-22 19:39:47.210235 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:47.210239 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-22 19:39:47.210243 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-22 19:39:47.210246 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-06-22 19:39:47.210250 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-06-22 19:39:47.210253 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:47.210257 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-06-22 19:39:47.210261 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-06-22 19:39:47.210267 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:05.909114 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-22 19:40:05.909225 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-22 19:40:05.909239 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:05.909250 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-22 19:40:05.909261 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-22 19:40:05.909271 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:05.909282 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:05.909293 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-22 19:40:05.909303 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-22 19:40:05.909314 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:05.909325 | orchestrator | 2025-06-22 19:40:05.909337 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-22 19:40:05.909349 | orchestrator | Sunday 22 June 2025 19:39:47 +0000 (0:00:00.544) 0:06:18.241 *********** 2025-06-22 19:40:05.909361 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:05.909372 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:05.909383 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:05.909393 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:05.909404 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:05.909414 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:05.909425 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:05.909435 | orchestrator | 2025-06-22 19:40:05.909446 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-22 19:40:05.909457 | orchestrator | Sunday 22 June 2025 19:39:47 +0000 (0:00:00.494) 0:06:18.736 *********** 2025-06-22 19:40:05.909468 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:05.909479 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:05.909489 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:05.909500 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:05.909511 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:05.909521 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:05.909532 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:05.909542 | orchestrator | 2025-06-22 
19:40:05.909553 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-22 19:40:05.909614 | orchestrator | Sunday 22 June 2025 19:39:48 +0000 (0:00:00.495) 0:06:19.231 *********** 2025-06-22 19:40:05.909627 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:05.909637 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:05.909649 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:05.909662 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:05.909673 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:05.909685 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:05.909697 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:05.909734 | orchestrator | 2025-06-22 19:40:05.909747 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-06-22 19:40:05.909759 | orchestrator | Sunday 22 June 2025 19:39:49 +0000 (0:00:00.684) 0:06:19.915 *********** 2025-06-22 19:40:05.909772 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.909784 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:05.909796 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:05.909809 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:05.909821 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:05.909832 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:05.909844 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:05.909856 | orchestrator | 2025-06-22 19:40:05.909868 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-22 19:40:05.909881 | orchestrator | Sunday 22 June 2025 19:39:50 +0000 (0:00:01.708) 0:06:21.624 *********** 2025-06-22 19:40:05.909895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:40:05.909910 | orchestrator | 2025-06-22 19:40:05.909922 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-22 19:40:05.909934 | orchestrator | Sunday 22 June 2025 19:39:51 +0000 (0:00:00.835) 0:06:22.459 *********** 2025-06-22 19:40:05.909947 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.909959 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:05.909971 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:05.909983 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:05.909996 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:05.910007 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:05.910092 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:05.910108 | orchestrator | 2025-06-22 19:40:05.910119 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-22 19:40:05.910130 | orchestrator | Sunday 22 June 2025 19:39:52 +0000 (0:00:00.845) 0:06:23.305 *********** 2025-06-22 19:40:05.910141 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.910151 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:05.910162 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:05.910173 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:05.910183 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:05.910193 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:05.910204 | orchestrator | changed: 
[testbed-node-2] 2025-06-22 19:40:05.910214 | orchestrator | 2025-06-22 19:40:05.910224 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-06-22 19:40:05.910236 | orchestrator | Sunday 22 June 2025 19:39:53 +0000 (0:00:01.034) 0:06:24.339 *********** 2025-06-22 19:40:05.910246 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.910257 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:05.910267 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:05.910278 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:05.910288 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:05.910299 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:05.910309 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:05.910320 | orchestrator | 2025-06-22 19:40:05.910330 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-06-22 19:40:05.910341 | orchestrator | Sunday 22 June 2025 19:39:54 +0000 (0:00:01.321) 0:06:25.660 *********** 2025-06-22 19:40:05.910370 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:05.910382 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:05.910392 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:05.910403 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:05.910413 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:05.910424 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:05.910434 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:05.910445 | orchestrator | 2025-06-22 19:40:05.910456 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-06-22 19:40:05.910531 | orchestrator | Sunday 22 June 2025 19:39:56 +0000 (0:00:01.332) 0:06:26.992 *********** 2025-06-22 19:40:05.910544 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.910554 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:05.910584 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:05.910595 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:05.910606 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:05.910616 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:05.910627 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:05.910637 | orchestrator | 2025-06-22 19:40:05.910648 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-06-22 19:40:05.910659 | orchestrator | Sunday 22 June 2025 19:39:57 +0000 (0:00:01.292) 0:06:28.285 *********** 2025-06-22 19:40:05.910669 | orchestrator | changed: [testbed-manager] 2025-06-22 19:40:05.910680 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:05.910691 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:05.910701 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:05.910712 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:05.910722 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:05.910732 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:05.910743 | orchestrator | 2025-06-22 19:40:05.910754 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-22 19:40:05.910765 | orchestrator | Sunday 22 June 2025 19:39:58 +0000 (0:00:01.377) 0:06:29.662 *********** 2025-06-22 19:40:05.910776 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:40:05.910787 | orchestrator | 2025-06-22 19:40:05.910797 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-22 19:40:05.910808 | orchestrator | Sunday 22 June 2025 19:39:59 +0000 (0:00:00.998) 0:06:30.660 *********** 2025-06-22 19:40:05.910819 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.910830 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:05.910840 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:05.910851 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:05.910862 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:05.910872 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:05.910882 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:05.910893 | orchestrator | 2025-06-22 19:40:05.910904 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-22 19:40:05.910915 | orchestrator | Sunday 22 June 2025 19:40:01 +0000 (0:00:01.338) 0:06:31.998 *********** 2025-06-22 19:40:05.910926 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.910936 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:05.910947 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:05.910957 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:05.910968 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:05.910978 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:05.910989 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:05.910999 | orchestrator | 2025-06-22 19:40:05.911010 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-22 19:40:05.911021 | orchestrator | Sunday 22 June 2025 19:40:02 +0000 (0:00:01.107) 0:06:33.106 *********** 2025-06-22 19:40:05.911032 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.911042 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:05.911053 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:05.911063 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:05.911073 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:05.911084 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:05.911094 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:05.911111 | orchestrator | 2025-06-22 19:40:05.911129 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-22 19:40:05.911148 | orchestrator | Sunday 22 June 2025 19:40:03 +0000 (0:00:01.354) 0:06:34.461 *********** 2025-06-22 19:40:05.911178 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:05.911198 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:05.911219 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:05.911240 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:05.911260 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:05.911276 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:05.911287 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:05.911297 | orchestrator | 2025-06-22 19:40:05.911315 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-22 19:40:05.911326 | orchestrator | Sunday 22 June 2025 19:40:04 +0000 (0:00:01.120) 0:06:35.581 *********** 2025-06-22 19:40:05.911337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:40:05.911348 | orchestrator | 2025-06-22 19:40:05.911359 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:40:05.911370 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.874) 0:06:36.455 *********** 2025-06-22 19:40:05.911381 | orchestrator | 2025-06-22 19:40:05.911392 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:40:05.911403 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.038) 0:06:36.494 *********** 2025-06-22 19:40:05.911414 | orchestrator | 2025-06-22 19:40:05.911424 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:40:05.911435 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.037) 0:06:36.531 *********** 2025-06-22 19:40:05.911446 | orchestrator | 2025-06-22 19:40:05.911457 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:40:05.911468 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.046) 0:06:36.578 *********** 2025-06-22 19:40:05.911478 | orchestrator | 2025-06-22 19:40:05.911498 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:40:31.305306 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.040) 0:06:36.618 *********** 2025-06-22 19:40:31.305416 | orchestrator | 2025-06-22 19:40:31.305437 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:40:31.305455 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.039) 0:06:36.658 *********** 2025-06-22 19:40:31.305472 | orchestrator | 2025-06-22 19:40:31.305491 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:40:31.305509 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.047) 0:06:36.706 *********** 2025-06-22 19:40:31.305525 | orchestrator | 2025-06-22 19:40:31.305542 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:40:31.305559 | orchestrator | Sunday 22 June 2025 19:40:05 +0000 (0:00:00.039) 0:06:36.745 *********** 2025-06-22 19:40:31.305616 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:31.305633 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:31.305648 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:31.305665 | orchestrator | 2025-06-22 19:40:31.305682 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-22 19:40:31.305696 | orchestrator | Sunday 22 June 2025 19:40:07 +0000 (0:00:01.284) 0:06:38.030 *********** 2025-06-22 19:40:31.305713 | orchestrator | changed: [testbed-manager] 2025-06-22 19:40:31.305731 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:31.305747 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:31.305764 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:31.305780 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:31.305797 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:31.305813 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:31.305830 | orchestrator | 2025-06-22 19:40:31.305847 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-22 19:40:31.305865 | orchestrator | Sunday 22 June 
2025 19:40:08 +0000 (0:00:01.314) 0:06:39.345 *********** 2025-06-22 19:40:31.305905 | orchestrator | changed: [testbed-manager] 2025-06-22 19:40:31.305924 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:31.305942 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:31.305959 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:31.305976 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:31.306001 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:31.306079 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:31.306098 | orchestrator | 2025-06-22 19:40:31.306118 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-22 19:40:31.306138 | orchestrator | Sunday 22 June 2025 19:40:09 +0000 (0:00:01.119) 0:06:40.465 *********** 2025-06-22 19:40:31.306157 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:31.306171 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:31.306182 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:31.306191 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:31.306200 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:31.306210 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:31.306219 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:31.306228 | orchestrator | 2025-06-22 19:40:31.306238 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-22 19:40:31.306248 | orchestrator | Sunday 22 June 2025 19:40:11 +0000 (0:00:02.276) 0:06:42.741 *********** 2025-06-22 19:40:31.306257 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:31.306267 | orchestrator | 2025-06-22 19:40:31.306276 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-22 19:40:31.306286 | orchestrator | Sunday 22 June 2025 19:40:11 +0000 (0:00:00.105) 0:06:42.847 *********** 2025-06-22 19:40:31.306296 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:31.306305 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:31.306315 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:31.306324 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:31.306333 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:31.306343 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:31.306352 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:31.306361 | orchestrator | 2025-06-22 19:40:31.306371 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-22 19:40:31.306381 | orchestrator | Sunday 22 June 2025 19:40:12 +0000 (0:00:00.989) 0:06:43.837 *********** 2025-06-22 19:40:31.306391 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:31.306400 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:31.306410 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:31.306419 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:31.306428 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:31.306438 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:31.306459 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:31.306468 | orchestrator | 2025-06-22 19:40:31.306478 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2025-06-22 19:40:31.306488 | orchestrator | Sunday 22 June 2025 19:40:13 +0000 (0:00:00.671) 0:06:44.508 *********** 2025-06-22 
19:40:31.306499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:40:31.306510 | orchestrator | 2025-06-22 19:40:31.306520 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-22 19:40:31.306530 | orchestrator | Sunday 22 June 2025 19:40:14 +0000 (0:00:00.951) 0:06:45.460 *********** 2025-06-22 19:40:31.306539 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:31.306549 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:31.306558 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:31.306588 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:31.306598 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:31.306607 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:31.306632 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:31.306642 | orchestrator | 2025-06-22 19:40:31.306652 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-22 19:40:31.306661 | orchestrator | Sunday 22 June 2025 19:40:15 +0000 (0:00:00.840) 0:06:46.300 *********** 2025-06-22 19:40:31.306671 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-22 19:40:31.306681 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-22 19:40:31.306709 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-22 19:40:31.306719 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-22 19:40:31.306729 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-22 19:40:31.306738 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-22 19:40:31.306748 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-22 19:40:31.306758 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-22 19:40:31.306767 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-22 19:40:31.306777 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-22 19:40:31.306786 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-22 19:40:31.306795 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-22 19:40:31.306805 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-22 19:40:31.306814 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-22 19:40:31.306823 | orchestrator | 2025-06-22 19:40:31.306833 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-22 19:40:31.306843 | orchestrator | Sunday 22 June 2025 19:40:18 +0000 (0:00:02.613) 0:06:48.914 *********** 2025-06-22 19:40:31.306852 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:31.306862 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:31.306871 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:31.306880 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:31.306890 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:31.306899 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:31.306908 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:31.306918 | orchestrator | 2025-06-22 19:40:31.306927 | orchestrator | TASK [osism.commons.docker_compose : Include distribution 
specific install tasks] *** 2025-06-22 19:40:31.306937 | orchestrator | Sunday 22 June 2025 19:40:18 +0000 (0:00:00.503) 0:06:49.418 *********** 2025-06-22 19:40:31.306947 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:40:31.306978 | orchestrator | 2025-06-22 19:40:31.306988 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-22 19:40:31.306997 | orchestrator | Sunday 22 June 2025 19:40:19 +0000 (0:00:00.746) 0:06:50.164 *********** 2025-06-22 19:40:31.307007 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:31.307017 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:31.307026 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:31.307036 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:31.307045 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:31.307054 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:31.307064 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:31.307073 | orchestrator | 2025-06-22 19:40:31.307083 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-22 19:40:31.307093 | orchestrator | Sunday 22 June 2025 19:40:20 +0000 (0:00:01.044) 0:06:51.209 *********** 2025-06-22 19:40:31.307102 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:31.307112 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:31.307121 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:31.307130 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:31.307140 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:31.307155 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:31.307165 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:31.307174 | orchestrator | 2025-06-22 19:40:31.307184 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-22 19:40:31.307194 | orchestrator | Sunday 22 June 2025 19:40:21 +0000 (0:00:00.807) 0:06:52.016 *********** 2025-06-22 19:40:31.307204 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:31.307213 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:31.307223 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:31.307232 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:31.307241 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:31.307251 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:31.307260 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:31.307270 | orchestrator | 2025-06-22 19:40:31.307279 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-22 19:40:31.307289 | orchestrator | Sunday 22 June 2025 19:40:21 +0000 (0:00:00.507) 0:06:52.524 *********** 2025-06-22 19:40:31.307303 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:31.307313 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:31.307322 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:31.307332 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:31.307341 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:31.307350 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:31.307360 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:31.307369 | orchestrator | 2025-06-22 19:40:31.307379 | orchestrator | TASK 
[osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-22 19:40:31.307388 | orchestrator | Sunday 22 June 2025 19:40:23 +0000 (0:00:01.395) 0:06:53.920 *********** 2025-06-22 19:40:31.307398 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:31.307407 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:31.307417 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:31.307426 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:31.307436 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:31.307445 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:31.307454 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:31.307463 | orchestrator | 2025-06-22 19:40:31.307473 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-22 19:40:31.307483 | orchestrator | Sunday 22 June 2025 19:40:23 +0000 (0:00:00.479) 0:06:54.399 *********** 2025-06-22 19:40:31.307492 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:31.307502 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:31.307511 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:31.307520 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:31.307530 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:31.307539 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:31.307548 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:31.307558 | orchestrator | 2025-06-22 19:40:31.307598 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-22 19:41:03.670683 | orchestrator | Sunday 22 June 2025 19:40:31 +0000 (0:00:07.748) 0:07:02.148 *********** 2025-06-22 19:41:03.670769 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.670788 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:03.670794 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:03.670800 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:03.670806 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:03.670857 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:03.670889 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:03.670898 | orchestrator | 2025-06-22 19:41:03.670908 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-22 19:41:03.670916 | orchestrator | Sunday 22 June 2025 19:40:32 +0000 (0:00:01.222) 0:07:03.370 *********** 2025-06-22 19:41:03.670921 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.670927 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:03.670936 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:03.670969 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:03.670976 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:03.670981 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:03.670987 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:03.670992 | orchestrator | 2025-06-22 19:41:03.670998 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-22 19:41:03.671003 | orchestrator | Sunday 22 June 2025 19:40:34 +0000 (0:00:01.699) 0:07:05.070 *********** 2025-06-22 19:41:03.671009 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671014 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:03.671019 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:03.671025 | orchestrator | changed: 
[testbed-node-5] 2025-06-22 19:41:03.671030 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:03.671036 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:03.671041 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:03.671046 | orchestrator | 2025-06-22 19:41:03.671051 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:41:03.671057 | orchestrator | Sunday 22 June 2025 19:40:35 +0000 (0:00:01.647) 0:07:06.718 *********** 2025-06-22 19:41:03.671062 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671067 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:03.671073 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:03.671078 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:03.671083 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:03.671089 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:03.671094 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:03.671099 | orchestrator | 2025-06-22 19:41:03.671104 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:41:03.671110 | orchestrator | Sunday 22 June 2025 19:40:37 +0000 (0:00:01.173) 0:07:07.891 *********** 2025-06-22 19:41:03.671115 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:41:03.671120 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:03.671125 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:03.671131 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:03.671136 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:03.671141 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:03.671146 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:03.671152 | orchestrator | 2025-06-22 19:41:03.671157 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-22 19:41:03.671162 | orchestrator | Sunday 22 June 2025 19:40:37 +0000 (0:00:00.798) 0:07:08.689 *********** 2025-06-22 19:41:03.671168 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:41:03.671173 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:03.671178 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:03.671184 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:03.671189 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:03.671194 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:03.671199 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:03.671204 | orchestrator | 2025-06-22 19:41:03.671210 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-22 19:41:03.671215 | orchestrator | Sunday 22 June 2025 19:40:38 +0000 (0:00:00.514) 0:07:09.204 *********** 2025-06-22 19:41:03.671220 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671226 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:03.671231 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:03.671236 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:03.671242 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:03.671248 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:03.671254 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:03.671260 | orchestrator | 2025-06-22 19:41:03.671266 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-22 19:41:03.671283 | orchestrator | Sunday 22 June 2025 19:40:39 +0000 (0:00:00.679) 0:07:09.883 
*********** 2025-06-22 19:41:03.671290 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671302 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:03.671308 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:03.671314 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:03.671320 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:03.671326 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:03.671332 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:03.671338 | orchestrator | 2025-06-22 19:41:03.671344 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-22 19:41:03.671350 | orchestrator | Sunday 22 June 2025 19:40:39 +0000 (0:00:00.517) 0:07:10.401 *********** 2025-06-22 19:41:03.671356 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671362 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:03.671368 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:03.671374 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:03.671380 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:03.671386 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:03.671392 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:03.671398 | orchestrator | 2025-06-22 19:41:03.671404 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-22 19:41:03.671410 | orchestrator | Sunday 22 June 2025 19:40:40 +0000 (0:00:00.499) 0:07:10.901 *********** 2025-06-22 19:41:03.671416 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671422 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:03.671428 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:03.671434 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:03.671440 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:03.671446 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:03.671452 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:03.671458 | orchestrator | 2025-06-22 19:41:03.671464 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-22 19:41:03.671482 | orchestrator | Sunday 22 June 2025 19:40:45 +0000 (0:00:05.847) 0:07:16.748 *********** 2025-06-22 19:41:03.671488 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:41:03.671494 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:03.671500 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:03.671506 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:03.671512 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:03.671518 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:03.671524 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:03.671530 | orchestrator | 2025-06-22 19:41:03.671537 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-22 19:41:03.671585 | orchestrator | Sunday 22 June 2025 19:40:46 +0000 (0:00:00.513) 0:07:17.261 *********** 2025-06-22 19:41:03.671595 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:41:03.671603 | orchestrator | 2025-06-22 19:41:03.671610 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-22 19:41:03.671616 | orchestrator | Sunday 22 June 2025 19:40:47 +0000 
(0:00:00.999) 0:07:18.261 *********** 2025-06-22 19:41:03.671622 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671627 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:03.671632 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:03.671638 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:03.671643 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:03.671648 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:03.671654 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:03.671659 | orchestrator | 2025-06-22 19:41:03.671664 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-22 19:41:03.671670 | orchestrator | Sunday 22 June 2025 19:40:49 +0000 (0:00:01.843) 0:07:20.105 *********** 2025-06-22 19:41:03.671675 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671680 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:03.671686 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:03.671696 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:03.671702 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:03.671707 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:03.671712 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:03.671718 | orchestrator | 2025-06-22 19:41:03.671723 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-22 19:41:03.671729 | orchestrator | Sunday 22 June 2025 19:40:50 +0000 (0:00:01.118) 0:07:21.223 *********** 2025-06-22 19:41:03.671734 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:03.671739 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:03.671745 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:03.671750 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:03.671755 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:03.671760 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:03.671766 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:03.671771 | orchestrator | 2025-06-22 19:41:03.671776 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-22 19:41:03.671782 | orchestrator | Sunday 22 June 2025 19:40:51 +0000 (0:00:01.090) 0:07:22.314 *********** 2025-06-22 19:41:03.671787 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:41:03.671795 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:41:03.671800 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:41:03.671805 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:41:03.671811 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:41:03.671816 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:41:03.671822 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 
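(Annotation, not part of the job output.) The chrony role has just templated its configuration file (presumably /etc/chrony/chrony.conf on these Debian-family hosts) onto the manager and all six nodes; the matching "Restart chrony service" handler fires further down. A minimal way to confirm that a node actually resynchronizes after that restart is to query chronyd directly. The commands below are an illustrative sketch only and assume SSH access to the nodes under the inventory names used in this log.

  # Sketch only: verify chronyd picked up the new configuration on one node.
  $ ssh testbed-node-0 'systemctl is-active chrony'   # service managed by the role
  $ ssh testbed-node-0 'chronyc sources -v'           # configured NTP sources and their reachability
  $ ssh testbed-node-0 'chronyc tracking'             # current offset, stratum and leap status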
2025-06-22 19:41:03.671827 | orchestrator | 2025-06-22 19:41:03.671833 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-22 19:41:03.671892 | orchestrator | Sunday 22 June 2025 19:40:53 +0000 (0:00:01.708) 0:07:24.022 *********** 2025-06-22 19:41:03.671900 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:41:03.671905 | orchestrator | 2025-06-22 19:41:03.671911 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-22 19:41:03.671916 | orchestrator | Sunday 22 June 2025 19:40:53 +0000 (0:00:00.779) 0:07:24.802 *********** 2025-06-22 19:41:03.671921 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:03.671927 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:03.671932 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:03.671938 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:03.671943 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:03.671948 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:03.671953 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:03.671958 | orchestrator | 2025-06-22 19:41:03.671964 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-22 19:41:03.671974 | orchestrator | Sunday 22 June 2025 19:41:03 +0000 (0:00:09.705) 0:07:34.508 *********** 2025-06-22 19:41:18.507940 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:18.507998 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:18.508004 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:18.508021 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:18.508026 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:18.508030 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:18.508035 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:18.508039 | orchestrator | 2025-06-22 19:41:18.508044 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-22 19:41:18.508050 | orchestrator | Sunday 22 June 2025 19:41:05 +0000 (0:00:01.701) 0:07:36.209 *********** 2025-06-22 19:41:18.508054 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:18.508059 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:18.508063 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:18.508067 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:18.508071 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:18.508076 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:18.508080 | orchestrator | 2025-06-22 19:41:18.508084 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-22 19:41:18.508089 | orchestrator | Sunday 22 June 2025 19:41:06 +0000 (0:00:01.338) 0:07:37.547 *********** 2025-06-22 19:41:18.508093 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:18.508098 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:18.508102 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:18.508106 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:18.508111 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:18.508115 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:18.508119 | orchestrator | changed: [testbed-node-2] 2025-06-22 
19:41:18.508124 | orchestrator | 2025-06-22 19:41:18.508128 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-22 19:41:18.508132 | orchestrator | 2025-06-22 19:41:18.508137 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-22 19:41:18.508141 | orchestrator | Sunday 22 June 2025 19:41:08 +0000 (0:00:01.436) 0:07:38.983 *********** 2025-06-22 19:41:18.508145 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:41:18.508149 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:18.508154 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:18.508158 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:18.508162 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:18.508167 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:18.508171 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:18.508175 | orchestrator | 2025-06-22 19:41:18.508179 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-22 19:41:18.508184 | orchestrator | 2025-06-22 19:41:18.508188 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-22 19:41:18.508192 | orchestrator | Sunday 22 June 2025 19:41:08 +0000 (0:00:00.508) 0:07:39.492 *********** 2025-06-22 19:41:18.508197 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:18.508201 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:18.508205 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:18.508209 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:18.508213 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:18.508218 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:18.508222 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:18.508227 | orchestrator | 2025-06-22 19:41:18.508231 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-22 19:41:18.508235 | orchestrator | Sunday 22 June 2025 19:41:09 +0000 (0:00:01.334) 0:07:40.827 *********** 2025-06-22 19:41:18.508240 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:18.508244 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:18.508248 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:18.508252 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:18.508257 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:18.508261 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:18.508265 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:18.508269 | orchestrator | 2025-06-22 19:41:18.508274 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-22 19:41:18.508308 | orchestrator | Sunday 22 June 2025 19:41:11 +0000 (0:00:01.423) 0:07:42.250 *********** 2025-06-22 19:41:18.508313 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:41:18.508318 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:18.508322 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:18.508327 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:18.508331 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:18.508335 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:18.508340 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:18.508344 | orchestrator | 2025-06-22 19:41:18.508348 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart 
journald service] *********** 2025-06-22 19:41:18.508360 | orchestrator | Sunday 22 June 2025 19:41:12 +0000 (0:00:00.910) 0:07:43.160 *********** 2025-06-22 19:41:18.508364 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:18.508368 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:18.508372 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:18.508377 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:18.508381 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:18.508385 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:18.508389 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:18.508394 | orchestrator | 2025-06-22 19:41:18.508398 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-22 19:41:18.508402 | orchestrator | 2025-06-22 19:41:18.508407 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-22 19:41:18.508411 | orchestrator | Sunday 22 June 2025 19:41:13 +0000 (0:00:01.128) 0:07:44.289 *********** 2025-06-22 19:41:18.508416 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:41:18.508421 | orchestrator | 2025-06-22 19:41:18.508425 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-22 19:41:18.508430 | orchestrator | Sunday 22 June 2025 19:41:14 +0000 (0:00:00.785) 0:07:45.074 *********** 2025-06-22 19:41:18.508434 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:18.508438 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:18.508443 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:18.508447 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:18.508451 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:18.508456 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:18.508460 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:18.508464 | orchestrator | 2025-06-22 19:41:18.508477 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-22 19:41:18.508482 | orchestrator | Sunday 22 June 2025 19:41:14 +0000 (0:00:00.759) 0:07:45.833 *********** 2025-06-22 19:41:18.508486 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:18.508491 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:18.508495 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:18.508499 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:18.508504 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:18.508508 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:18.508512 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:18.508516 | orchestrator | 2025-06-22 19:41:18.508521 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-22 19:41:18.508552 | orchestrator | Sunday 22 June 2025 19:41:15 +0000 (0:00:00.929) 0:07:46.763 *********** 2025-06-22 19:41:18.508561 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:41:18.508569 | orchestrator | 2025-06-22 19:41:18.508574 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-22 19:41:18.508579 | orchestrator | Sunday 22 June 2025 19:41:16 +0000 (0:00:00.821) 0:07:47.585 
*********** 2025-06-22 19:41:18.508584 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:18.508589 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:18.508594 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:18.508602 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:18.508608 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:18.508613 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:18.508617 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:18.508622 | orchestrator | 2025-06-22 19:41:18.508627 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-22 19:41:18.508632 | orchestrator | Sunday 22 June 2025 19:41:17 +0000 (0:00:00.783) 0:07:48.368 *********** 2025-06-22 19:41:18.508638 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:18.508643 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:18.508647 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:18.508652 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:18.508657 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:18.508662 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:18.508667 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:18.508672 | orchestrator | 2025-06-22 19:41:18.508677 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:41:18.508693 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-22 19:41:18.508698 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:41:18.508704 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:41:18.508709 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:41:18.508714 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-22 19:41:18.508719 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:41:18.508724 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:41:18.508729 | orchestrator | 2025-06-22 19:41:18.508734 | orchestrator | 2025-06-22 19:41:18.508739 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:41:18.508745 | orchestrator | Sunday 22 June 2025 19:41:18 +0000 (0:00:00.969) 0:07:49.338 *********** 2025-06-22 19:41:18.508755 | orchestrator | =============================================================================== 2025-06-22 19:41:18.508761 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.89s 2025-06-22 19:41:18.508766 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.38s 2025-06-22 19:41:18.508771 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.09s 2025-06-22 19:41:18.508776 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.41s 2025-06-22 19:41:18.508781 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.06s 2025-06-22 19:41:18.508787 | orchestrator | osism.commons.systohc : Install util-linux-extra package 
--------------- 11.88s 2025-06-22 19:41:18.508792 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.38s 2025-06-22 19:41:18.508797 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.71s 2025-06-22 19:41:18.508802 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.67s 2025-06-22 19:41:18.508807 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.82s 2025-06-22 19:41:18.508812 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.50s 2025-06-22 19:41:18.508818 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.12s 2025-06-22 19:41:18.508826 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.96s 2025-06-22 19:41:18.508831 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.75s 2025-06-22 19:41:18.508840 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.51s 2025-06-22 19:41:18.755948 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.35s 2025-06-22 19:41:18.756103 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.25s 2025-06-22 19:41:18.756117 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.85s 2025-06-22 19:41:18.756128 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.75s 2025-06-22 19:41:18.756139 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.74s 2025-06-22 19:41:18.898114 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-22 19:41:18.898223 | orchestrator | + osism apply network 2025-06-22 19:41:20.785601 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:41:20.785741 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:41:20.785756 | orchestrator | Registering Redlock._release_script 2025-06-22 19:41:20.845697 | orchestrator | 2025-06-22 19:41:20 | INFO  | Task 9fe14847-8154-44f2-8ced-c678ec9f6a3d (network) was prepared for execution. 2025-06-22 19:41:20.845775 | orchestrator | 2025-06-22 19:41:20 | INFO  | It takes a moment until task 9fe14847-8154-44f2-8ced-c678ec9f6a3d (network) has been started and output is visible here. 
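(Annotation, not part of the job output.) The bootstrap play has finished and the deploy script moves on to `osism apply network`, which runs the osism.commons.network role shown below: netplan configuration plus the vxlan0/vxlan1 overlay interfaces managed through systemd-networkd. Once that play completes, the rendered network state can be inspected directly on a node. The commands below are an illustrative sketch only, assuming SSH access under the inventory names used here; the VNI and MTU values come from the task output further down.

  # Sketch only: inspect the network configuration written by the role.
  $ ssh testbed-node-0 'sudo netplan get'             # merged netplan config, including the role-managed 01-osism.yaml
  $ ssh testbed-node-0 'networkctl status vxlan0'     # systemd-networkd view of the overlay interface (VNI 42 in this log)
  $ ssh testbed-node-0 'ip -d link show vxlan1'       # kernel view; expect VNI 23 and MTU 1350 as templated below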
2025-06-22 19:41:48.505056 | orchestrator | 2025-06-22 19:41:48.505135 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-22 19:41:48.505147 | orchestrator | 2025-06-22 19:41:48.505155 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-22 19:41:48.505164 | orchestrator | Sunday 22 June 2025 19:41:24 +0000 (0:00:00.301) 0:00:00.301 *********** 2025-06-22 19:41:48.505172 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:48.505180 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:48.505188 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:48.505196 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:48.505203 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:48.505211 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:48.505219 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:48.505226 | orchestrator | 2025-06-22 19:41:48.505234 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-22 19:41:48.505242 | orchestrator | Sunday 22 June 2025 19:41:25 +0000 (0:00:00.703) 0:00:01.004 *********** 2025-06-22 19:41:48.505251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:41:48.505260 | orchestrator | 2025-06-22 19:41:48.505268 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-22 19:41:48.505276 | orchestrator | Sunday 22 June 2025 19:41:26 +0000 (0:00:01.188) 0:00:02.193 *********** 2025-06-22 19:41:48.505284 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:48.505292 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:48.505299 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:48.505307 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:48.505315 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:48.505322 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:48.505330 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:48.505338 | orchestrator | 2025-06-22 19:41:48.505345 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-22 19:41:48.505353 | orchestrator | Sunday 22 June 2025 19:41:28 +0000 (0:00:01.986) 0:00:04.179 *********** 2025-06-22 19:41:48.505361 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:48.505369 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:48.505377 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:48.505401 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:48.505410 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:48.505417 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:48.505425 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:48.505432 | orchestrator | 2025-06-22 19:41:48.505440 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-22 19:41:48.505448 | orchestrator | Sunday 22 June 2025 19:41:30 +0000 (0:00:01.936) 0:00:06.116 *********** 2025-06-22 19:41:48.505456 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-22 19:41:48.505464 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-22 19:41:48.505472 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-22 19:41:48.505489 | orchestrator 
| ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-22 19:41:48.505521 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-22 19:41:48.505529 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-22 19:41:48.505537 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-22 19:41:48.505544 | orchestrator | 2025-06-22 19:41:48.505553 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-22 19:41:48.505560 | orchestrator | Sunday 22 June 2025 19:41:31 +0000 (0:00:00.963) 0:00:07.079 *********** 2025-06-22 19:41:48.505568 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:41:48.505576 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 19:41:48.505584 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:41:48.505591 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 19:41:48.505599 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 19:41:48.505607 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 19:41:48.505625 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 19:41:48.505640 | orchestrator | 2025-06-22 19:41:48.505649 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-22 19:41:48.505658 | orchestrator | Sunday 22 June 2025 19:41:34 +0000 (0:00:03.313) 0:00:10.393 *********** 2025-06-22 19:41:48.505666 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:48.505675 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:48.505683 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:48.505692 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:48.505700 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:48.505709 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:48.505717 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:48.505725 | orchestrator | 2025-06-22 19:41:48.505734 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-22 19:41:48.505743 | orchestrator | Sunday 22 June 2025 19:41:36 +0000 (0:00:01.505) 0:00:11.899 *********** 2025-06-22 19:41:48.505751 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:41:48.505760 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:41:48.505769 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 19:41:48.505777 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 19:41:48.505784 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 19:41:48.505792 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 19:41:48.505799 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 19:41:48.505807 | orchestrator | 2025-06-22 19:41:48.505815 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-22 19:41:48.505822 | orchestrator | Sunday 22 June 2025 19:41:38 +0000 (0:00:02.048) 0:00:13.947 *********** 2025-06-22 19:41:48.505830 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:48.505838 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:48.505846 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:48.505854 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:48.505861 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:48.505869 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:48.505876 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:48.505884 | orchestrator | 2025-06-22 
19:41:48.505892 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-22 19:41:48.505918 | orchestrator | Sunday 22 June 2025 19:41:39 +0000 (0:00:01.091) 0:00:15.039 *********** 2025-06-22 19:41:48.505927 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:41:48.505934 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:48.505942 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:48.505949 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:48.505957 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:48.505965 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:48.505972 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:48.505980 | orchestrator | 2025-06-22 19:41:48.505987 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-22 19:41:48.505995 | orchestrator | Sunday 22 June 2025 19:41:40 +0000 (0:00:00.659) 0:00:15.698 *********** 2025-06-22 19:41:48.506003 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:48.506010 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:48.506054 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:48.506062 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:48.506070 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:48.506077 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:48.506085 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:48.506093 | orchestrator | 2025-06-22 19:41:48.506100 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-22 19:41:48.506108 | orchestrator | Sunday 22 June 2025 19:41:42 +0000 (0:00:02.145) 0:00:17.844 *********** 2025-06-22 19:41:48.506116 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:48.506123 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:48.506131 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:48.506138 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:48.506146 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:48.506154 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:48.506162 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-22 19:41:48.506170 | orchestrator | 2025-06-22 19:41:48.506178 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-06-22 19:41:48.506186 | orchestrator | Sunday 22 June 2025 19:41:43 +0000 (0:00:00.856) 0:00:18.701 *********** 2025-06-22 19:41:48.506194 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:48.506202 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:48.506209 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:48.506217 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:48.506224 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:48.506232 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:48.506239 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:48.506247 | orchestrator | 2025-06-22 19:41:48.506255 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-22 19:41:48.506262 | orchestrator | Sunday 22 June 2025 19:41:44 +0000 (0:00:01.579) 0:00:20.280 *********** 2025-06-22 19:41:48.506274 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:41:48.506284 | orchestrator | 2025-06-22 19:41:48.506291 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-22 19:41:48.506299 | orchestrator | Sunday 22 June 2025 19:41:45 +0000 (0:00:01.117) 0:00:21.398 *********** 2025-06-22 19:41:48.506307 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:48.506315 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:48.506322 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:48.506330 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:48.506338 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:48.506345 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:48.506353 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:48.506360 | orchestrator | 2025-06-22 19:41:48.506368 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-22 19:41:48.506381 | orchestrator | Sunday 22 June 2025 19:41:46 +0000 (0:00:00.903) 0:00:22.301 *********** 2025-06-22 19:41:48.506389 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:48.506396 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:48.506404 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:48.506412 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:48.506419 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:48.506427 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:48.506434 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:48.506442 | orchestrator | 2025-06-22 19:41:48.506450 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-22 19:41:48.506458 | orchestrator | Sunday 22 June 2025 19:41:47 +0000 (0:00:00.685) 0:00:22.987 *********** 2025-06-22 19:41:48.506465 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:41:48.506473 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:41:48.506481 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:41:48.506488 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:41:48.506521 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:41:48.506529 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:41:48.506537 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:41:48.506544 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:41:48.506552 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:41:48.506559 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:41:48.506567 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:41:48.506575 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:41:48.506582 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:41:48.506590 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 
19:41:48.506598 | orchestrator | 2025-06-22 19:41:48.506611 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-22 19:42:03.530271 | orchestrator | Sunday 22 June 2025 19:41:48 +0000 (0:00:01.042) 0:00:24.030 *********** 2025-06-22 19:42:03.530378 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:42:03.530394 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:42:03.530405 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:42:03.530416 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:42:03.530427 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:42:03.530438 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:42:03.530448 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:42:03.530459 | orchestrator | 2025-06-22 19:42:03.530471 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-22 19:42:03.530482 | orchestrator | Sunday 22 June 2025 19:41:49 +0000 (0:00:00.544) 0:00:24.574 *********** 2025-06-22 19:42:03.530536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-2, testbed-node-0, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:42:03.530550 | orchestrator | 2025-06-22 19:42:03.530561 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-22 19:42:03.530572 | orchestrator | Sunday 22 June 2025 19:41:53 +0000 (0:00:04.132) 0:00:28.706 *********** 2025-06-22 19:42:03.530585 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530662 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530674 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530685 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': 
['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530770 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530792 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530803 | orchestrator | 2025-06-22 19:42:03.530817 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-22 19:42:03.530830 | orchestrator | Sunday 22 June 2025 19:41:58 +0000 (0:00:05.035) 0:00:33.742 *********** 2025-06-22 19:42:03.530861 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530889 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530901 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530919 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530945 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530957 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:42:03.530970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530983 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.530995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.531007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:03.531074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:09.440597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:42:09.440731 | orchestrator | 2025-06-22 19:42:09.440747 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-22 19:42:09.440760 | orchestrator | Sunday 22 June 2025 19:42:03 +0000 (0:00:05.304) 0:00:39.047 
*********** 2025-06-22 19:42:09.440773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:42:09.440785 | orchestrator | 2025-06-22 19:42:09.440796 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-22 19:42:09.440806 | orchestrator | Sunday 22 June 2025 19:42:04 +0000 (0:00:01.112) 0:00:40.160 *********** 2025-06-22 19:42:09.440817 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:09.440829 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:42:09.440840 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:09.440850 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:09.440860 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:09.440871 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:42:09.440881 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:09.440892 | orchestrator | 2025-06-22 19:42:09.440903 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-22 19:42:09.440913 | orchestrator | Sunday 22 June 2025 19:42:05 +0000 (0:00:01.081) 0:00:41.241 *********** 2025-06-22 19:42:09.440924 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:42:09.440935 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:42:09.440946 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:42:09.440956 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:42:09.440967 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:42:09.440977 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:42:09.440988 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:42:09.441013 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:42:09.441024 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:42:09.441035 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:42:09.441045 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:42:09.441055 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:42:09.441066 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:42:09.441076 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:42:09.441088 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:42:09.441101 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:42:09.441113 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:42:09.441125 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:42:09.441138 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:42:09.441151 | orchestrator | skipping: [testbed-node-3] => 
(item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:42:09.441163 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:42:09.441175 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:42:09.441187 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:42:09.441207 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:42:09.441219 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:42:09.441231 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:42:09.441243 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:42:09.441256 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:42:09.441268 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:42:09.441280 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:42:09.441293 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:42:09.441305 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:42:09.441317 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:42:09.441329 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:42:09.441341 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:42:09.441354 | orchestrator | 2025-06-22 19:42:09.441367 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-22 19:42:09.441395 | orchestrator | Sunday 22 June 2025 19:42:07 +0000 (0:00:02.017) 0:00:43.258 *********** 2025-06-22 19:42:09.441408 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:42:09.441420 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:42:09.441433 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:42:09.441444 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:42:09.441455 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:42:09.441465 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:42:09.441476 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:42:09.441505 | orchestrator | 2025-06-22 19:42:09.441516 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-06-22 19:42:09.441527 | orchestrator | Sunday 22 June 2025 19:42:08 +0000 (0:00:00.643) 0:00:43.902 *********** 2025-06-22 19:42:09.441538 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:42:09.441548 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:42:09.441558 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:42:09.441569 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:42:09.441579 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:42:09.441590 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:42:09.441600 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:42:09.441611 | orchestrator | 2025-06-22 19:42:09.441621 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:42:09.441633 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:42:09.441645 
| orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:42:09.441656 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:42:09.441667 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:42:09.441677 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:42:09.441688 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:42:09.441703 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:42:09.441721 | orchestrator | 2025-06-22 19:42:09.441732 | orchestrator | 2025-06-22 19:42:09.441743 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:42:09.441753 | orchestrator | Sunday 22 June 2025 19:42:09 +0000 (0:00:00.703) 0:00:44.606 *********** 2025-06-22 19:42:09.441764 | orchestrator | =============================================================================== 2025-06-22 19:42:09.441775 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.30s 2025-06-22 19:42:09.441785 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.04s 2025-06-22 19:42:09.441796 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.13s 2025-06-22 19:42:09.441806 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.31s 2025-06-22 19:42:09.441817 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.15s 2025-06-22 19:42:09.441827 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.05s 2025-06-22 19:42:09.441838 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.02s 2025-06-22 19:42:09.441848 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.99s 2025-06-22 19:42:09.441859 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.94s 2025-06-22 19:42:09.441869 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.58s 2025-06-22 19:42:09.441880 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.51s 2025-06-22 19:42:09.441890 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.19s 2025-06-22 19:42:09.441901 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.12s 2025-06-22 19:42:09.441911 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s 2025-06-22 19:42:09.441922 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.09s 2025-06-22 19:42:09.441932 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.08s 2025-06-22 19:42:09.441943 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.04s 2025-06-22 19:42:09.441953 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-06-22 19:42:09.441963 | orchestrator | osism.commons.network : List existing configuration 
files --------------- 0.90s 2025-06-22 19:42:09.441974 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.86s 2025-06-22 19:42:09.682455 | orchestrator | + osism apply wireguard 2025-06-22 19:42:11.489301 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:42:11.489392 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:42:11.489405 | orchestrator | Registering Redlock._release_script 2025-06-22 19:42:11.545898 | orchestrator | 2025-06-22 19:42:11 | INFO  | Task ab1be93e-d9c2-4d87-aa69-96df7274d1a1 (wireguard) was prepared for execution. 2025-06-22 19:42:11.545946 | orchestrator | 2025-06-22 19:42:11 | INFO  | It takes a moment until task ab1be93e-d9c2-4d87-aa69-96df7274d1a1 (wireguard) has been started and output is visible here. 2025-06-22 19:42:28.854229 | orchestrator | 2025-06-22 19:42:28.854388 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-22 19:42:28.854406 | orchestrator | 2025-06-22 19:42:28.854418 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-22 19:42:28.854430 | orchestrator | Sunday 22 June 2025 19:42:15 +0000 (0:00:00.174) 0:00:00.174 *********** 2025-06-22 19:42:28.854441 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:28.854453 | orchestrator | 2025-06-22 19:42:28.854465 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-22 19:42:28.854475 | orchestrator | Sunday 22 June 2025 19:42:16 +0000 (0:00:01.179) 0:00:01.353 *********** 2025-06-22 19:42:28.854524 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:28.854536 | orchestrator | 2025-06-22 19:42:28.854546 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-22 19:42:28.854598 | orchestrator | Sunday 22 June 2025 19:42:21 +0000 (0:00:05.237) 0:00:06.591 *********** 2025-06-22 19:42:28.854611 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:28.854622 | orchestrator | 2025-06-22 19:42:28.854632 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-22 19:42:28.854643 | orchestrator | Sunday 22 June 2025 19:42:22 +0000 (0:00:00.485) 0:00:07.077 *********** 2025-06-22 19:42:28.854653 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:28.854664 | orchestrator | 2025-06-22 19:42:28.854674 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-22 19:42:28.854685 | orchestrator | Sunday 22 June 2025 19:42:22 +0000 (0:00:00.391) 0:00:07.468 *********** 2025-06-22 19:42:28.854695 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:28.854705 | orchestrator | 2025-06-22 19:42:28.854716 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-22 19:42:28.854726 | orchestrator | Sunday 22 June 2025 19:42:23 +0000 (0:00:00.458) 0:00:07.927 *********** 2025-06-22 19:42:28.854736 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:28.854747 | orchestrator | 2025-06-22 19:42:28.854758 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-22 19:42:28.854768 | orchestrator | Sunday 22 June 2025 19:42:23 +0000 (0:00:00.462) 0:00:08.390 *********** 2025-06-22 19:42:28.854779 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:28.854789 | orchestrator | 2025-06-22 19:42:28.854800 | orchestrator | TASK 
[osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-22 19:42:28.854810 | orchestrator | Sunday 22 June 2025 19:42:24 +0000 (0:00:00.375) 0:00:08.766 *********** 2025-06-22 19:42:28.854820 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:28.854831 | orchestrator | 2025-06-22 19:42:28.854856 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-22 19:42:28.854867 | orchestrator | Sunday 22 June 2025 19:42:25 +0000 (0:00:01.037) 0:00:09.803 *********** 2025-06-22 19:42:28.854878 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:42:28.854888 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:28.854899 | orchestrator | 2025-06-22 19:42:28.854910 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-22 19:42:28.854920 | orchestrator | Sunday 22 June 2025 19:42:25 +0000 (0:00:00.822) 0:00:10.626 *********** 2025-06-22 19:42:28.854930 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:28.854941 | orchestrator | 2025-06-22 19:42:28.854951 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-22 19:42:28.854962 | orchestrator | Sunday 22 June 2025 19:42:27 +0000 (0:00:01.640) 0:00:12.266 *********** 2025-06-22 19:42:28.854972 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:28.854983 | orchestrator | 2025-06-22 19:42:28.854993 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:42:28.855003 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:42:28.855015 | orchestrator | 2025-06-22 19:42:28.855026 | orchestrator | 2025-06-22 19:42:28.855036 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:42:28.855047 | orchestrator | Sunday 22 June 2025 19:42:28 +0000 (0:00:00.925) 0:00:13.192 *********** 2025-06-22 19:42:28.855057 | orchestrator | =============================================================================== 2025-06-22 19:42:28.855068 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.24s 2025-06-22 19:42:28.855078 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.64s 2025-06-22 19:42:28.855089 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.18s 2025-06-22 19:42:28.855099 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.04s 2025-06-22 19:42:28.855110 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.93s 2025-06-22 19:42:28.855128 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.82s 2025-06-22 19:42:28.855139 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.49s 2025-06-22 19:42:28.855149 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s 2025-06-22 19:42:28.855159 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.46s 2025-06-22 19:42:28.855170 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.39s 2025-06-22 19:42:28.855181 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.38s 2025-06-22 
19:42:29.119729 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-22 19:42:29.151634 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-06-22 19:42:29.151730 | orchestrator | Dload Upload Total Spent Left Speed 2025-06-22 19:42:29.231819 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 174 0 --:--:-- --:--:-- --:--:-- 177 2025-06-22 19:42:29.245583 | orchestrator | + osism apply --environment custom workarounds 2025-06-22 19:42:30.875849 | orchestrator | 2025-06-22 19:42:30 | INFO  | Trying to run play workarounds in environment custom 2025-06-22 19:42:30.879648 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:42:30.879671 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:42:30.879677 | orchestrator | Registering Redlock._release_script 2025-06-22 19:42:30.930079 | orchestrator | 2025-06-22 19:42:30 | INFO  | Task cd6d79d1-03d0-4d42-b79f-cb016799414f (workarounds) was prepared for execution. 2025-06-22 19:42:30.930167 | orchestrator | 2025-06-22 19:42:30 | INFO  | It takes a moment until task cd6d79d1-03d0-4d42-b79f-cb016799414f (workarounds) has been started and output is visible here. 2025-06-22 19:42:55.615468 | orchestrator | 2025-06-22 19:42:55.615624 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:42:55.615641 | orchestrator | 2025-06-22 19:42:55.615653 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-22 19:42:55.615664 | orchestrator | Sunday 22 June 2025 19:42:34 +0000 (0:00:00.148) 0:00:00.148 *********** 2025-06-22 19:42:55.615675 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-22 19:42:55.615686 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-22 19:42:55.615697 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-22 19:42:55.615707 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-22 19:42:55.615718 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-22 19:42:55.615728 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-22 19:42:55.615740 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-22 19:42:55.615751 | orchestrator | 2025-06-22 19:42:55.615761 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-22 19:42:55.615772 | orchestrator | 2025-06-22 19:42:55.615782 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-22 19:42:55.615793 | orchestrator | Sunday 22 June 2025 19:42:35 +0000 (0:00:00.788) 0:00:00.936 *********** 2025-06-22 19:42:55.615804 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:55.615816 | orchestrator | 2025-06-22 19:42:55.615846 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-22 19:42:55.615865 | orchestrator | 2025-06-22 19:42:55.615884 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-22 19:42:55.615902 | orchestrator | Sunday 22 June 2025 19:42:37 +0000 (0:00:02.279) 0:00:03.216 *********** 2025-06-22 19:42:55.615919 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:55.615936 | orchestrator | ok: [testbed-node-4] 
2025-06-22 19:42:55.615953 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:55.616003 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:42:55.616025 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:55.616044 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:55.616057 | orchestrator | 2025-06-22 19:42:55.616069 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-22 19:42:55.616081 | orchestrator | 2025-06-22 19:42:55.616093 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-06-22 19:42:55.616104 | orchestrator | Sunday 22 June 2025 19:42:39 +0000 (0:00:01.902) 0:00:05.119 *********** 2025-06-22 19:42:55.616117 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:42:55.616130 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:42:55.616141 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:42:55.616153 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:42:55.616165 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:42:55.616176 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:42:55.616188 | orchestrator | 2025-06-22 19:42:55.616200 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-06-22 19:42:55.616212 | orchestrator | Sunday 22 June 2025 19:42:41 +0000 (0:00:01.469) 0:00:06.588 *********** 2025-06-22 19:42:55.616224 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:42:55.616235 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:42:55.616246 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:42:55.616258 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:42:55.616270 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:42:55.616282 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:42:55.616293 | orchestrator | 2025-06-22 19:42:55.616305 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-22 19:42:55.616318 | orchestrator | Sunday 22 June 2025 19:42:44 +0000 (0:00:03.798) 0:00:10.387 *********** 2025-06-22 19:42:55.616329 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:42:55.616340 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:42:55.616350 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:42:55.616360 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:42:55.616371 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:42:55.616381 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:42:55.616392 | orchestrator | 2025-06-22 19:42:55.616402 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-22 19:42:55.616416 | orchestrator | 2025-06-22 19:42:55.616434 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-22 19:42:55.616451 | orchestrator | Sunday 22 June 2025 19:42:45 +0000 (0:00:00.831) 0:00:11.219 *********** 2025-06-22 19:42:55.616469 | orchestrator | changed: [testbed-manager] 2025-06-22 
19:42:55.616486 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:42:55.616528 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:42:55.616541 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:42:55.616551 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:42:55.616562 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:42:55.616572 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:42:55.616582 | orchestrator | 2025-06-22 19:42:55.616593 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-06-22 19:42:55.616603 | orchestrator | Sunday 22 June 2025 19:42:47 +0000 (0:00:01.642) 0:00:12.861 *********** 2025-06-22 19:42:55.616614 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:55.616624 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:42:55.616635 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:42:55.616645 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:42:55.616666 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:42:55.616676 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:42:55.616708 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:42:55.616719 | orchestrator | 2025-06-22 19:42:55.616730 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-22 19:42:55.616740 | orchestrator | Sunday 22 June 2025 19:42:48 +0000 (0:00:01.608) 0:00:14.470 *********** 2025-06-22 19:42:55.616751 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:55.616761 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:55.616772 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:42:55.616782 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:55.616792 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:42:55.616803 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:55.616813 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:55.616823 | orchestrator | 2025-06-22 19:42:55.616834 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-22 19:42:55.616845 | orchestrator | Sunday 22 June 2025 19:42:50 +0000 (0:00:01.486) 0:00:15.956 *********** 2025-06-22 19:42:55.616855 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:55.616865 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:42:55.616876 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:42:55.616886 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:42:55.616897 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:42:55.616907 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:42:55.616918 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:42:55.616928 | orchestrator | 2025-06-22 19:42:55.616939 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-22 19:42:55.616949 | orchestrator | Sunday 22 June 2025 19:42:52 +0000 (0:00:01.717) 0:00:17.674 *********** 2025-06-22 19:42:55.616960 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:42:55.616970 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:42:55.616980 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:42:55.616998 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:42:55.617017 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:42:55.617035 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:42:55.617053 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:42:55.617071 | orchestrator | 2025-06-22 
19:42:55.617089 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-22 19:42:55.617107 | orchestrator | 2025-06-22 19:42:55.617125 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-22 19:42:55.617166 | orchestrator | Sunday 22 June 2025 19:42:52 +0000 (0:00:00.588) 0:00:18.263 *********** 2025-06-22 19:42:55.617196 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:55.617207 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:55.617217 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:42:55.617227 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:55.617238 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:55.617248 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:55.617259 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:42:55.617269 | orchestrator | 2025-06-22 19:42:55.617280 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:42:55.617291 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:55.617304 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:42:55.617314 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:42:55.617325 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:42:55.617336 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:42:55.617356 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:42:55.617366 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:42:55.617377 | orchestrator | 2025-06-22 19:42:55.617388 | orchestrator | 2025-06-22 19:42:55.617398 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:42:55.617409 | orchestrator | Sunday 22 June 2025 19:42:55 +0000 (0:00:02.886) 0:00:21.149 *********** 2025-06-22 19:42:55.617419 | orchestrator | =============================================================================== 2025-06-22 19:42:55.617430 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.80s 2025-06-22 19:42:55.617440 | orchestrator | Install python3-docker -------------------------------------------------- 2.89s 2025-06-22 19:42:55.617451 | orchestrator | Apply netplan configuration --------------------------------------------- 2.28s 2025-06-22 19:42:55.617461 | orchestrator | Apply netplan configuration --------------------------------------------- 1.90s 2025-06-22 19:42:55.617472 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.72s 2025-06-22 19:42:55.617482 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.64s 2025-06-22 19:42:55.617499 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.61s 2025-06-22 19:42:55.617561 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-06-22 19:42:55.617580 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s 2025-06-22 19:42:55.617591 | 
orchestrator | Run update-ca-trust ----------------------------------------------------- 0.83s 2025-06-22 19:42:55.617602 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.79s 2025-06-22 19:42:55.617623 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.59s 2025-06-22 19:42:56.138748 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-22 19:42:57.863550 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:42:57.863640 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:42:57.863651 | orchestrator | Registering Redlock._release_script 2025-06-22 19:42:57.919170 | orchestrator | 2025-06-22 19:42:57 | INFO  | Task 7bc2dba7-45a5-4a6c-aa13-d5ea81315b7a (reboot) was prepared for execution. 2025-06-22 19:42:57.919250 | orchestrator | 2025-06-22 19:42:57 | INFO  | It takes a moment until task 7bc2dba7-45a5-4a6c-aa13-d5ea81315b7a (reboot) has been started and output is visible here. 2025-06-22 19:43:07.141448 | orchestrator | 2025-06-22 19:43:07.141622 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:43:07.141642 | orchestrator | 2025-06-22 19:43:07.141653 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:43:07.141665 | orchestrator | Sunday 22 June 2025 19:43:01 +0000 (0:00:00.157) 0:00:00.157 *********** 2025-06-22 19:43:07.141676 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:43:07.141688 | orchestrator | 2025-06-22 19:43:07.141699 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:43:07.141710 | orchestrator | Sunday 22 June 2025 19:43:01 +0000 (0:00:00.079) 0:00:00.236 *********** 2025-06-22 19:43:07.141721 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:43:07.141732 | orchestrator | 2025-06-22 19:43:07.141743 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:43:07.141754 | orchestrator | Sunday 22 June 2025 19:43:02 +0000 (0:00:00.904) 0:00:01.141 *********** 2025-06-22 19:43:07.141765 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:43:07.141775 | orchestrator | 2025-06-22 19:43:07.141786 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:43:07.141818 | orchestrator | 2025-06-22 19:43:07.141829 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:43:07.141840 | orchestrator | Sunday 22 June 2025 19:43:02 +0000 (0:00:00.085) 0:00:01.227 *********** 2025-06-22 19:43:07.141850 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:43:07.141861 | orchestrator | 2025-06-22 19:43:07.141872 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:43:07.141883 | orchestrator | Sunday 22 June 2025 19:43:02 +0000 (0:00:00.084) 0:00:01.311 *********** 2025-06-22 19:43:07.141893 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:43:07.141904 | orchestrator | 2025-06-22 19:43:07.141916 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:43:07.141945 | orchestrator | Sunday 22 June 2025 19:43:03 +0000 (0:00:00.678) 0:00:01.989 *********** 2025-06-22 19:43:07.141956 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:43:07.141967 | orchestrator | 
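Each testbed node above gets its own short "Reboot systems" play: the reboot is triggered without waiting for it to complete, and reachability is only verified afterwards by the separate wait-for-connection run further below. As a rough illustration only (not part of the job), a node's return could also be confirmed by polling SSH until it answers again; the host name and five-minute timeout here are placeholder values, not taken from the testbed configuration:

  # Illustrative sketch (bash): poll a rebooted node until SSH responds again.
  HOST=testbed-node-1                    # placeholder host
  deadline=$(( $(date +%s) + 300 ))      # give up after 5 minutes (arbitrary)
  until ssh -o BatchMode=yes -o ConnectTimeout=5 "$HOST" uptime -s 2>/dev/null; do
      [ "$(date +%s)" -ge "$deadline" ] && { echo "timed out waiting for $HOST" >&2; exit 1; }
      sleep 5
  done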
2025-06-22 19:43:07.141978 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:43:07.141990 | orchestrator | 2025-06-22 19:43:07.142003 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:43:07.142015 | orchestrator | Sunday 22 June 2025 19:43:03 +0000 (0:00:00.097) 0:00:02.087 *********** 2025-06-22 19:43:07.142211 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:43:07.142231 | orchestrator | 2025-06-22 19:43:07.142248 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:43:07.142267 | orchestrator | Sunday 22 June 2025 19:43:03 +0000 (0:00:00.162) 0:00:02.249 *********** 2025-06-22 19:43:07.142285 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:43:07.142305 | orchestrator | 2025-06-22 19:43:07.142323 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:43:07.142340 | orchestrator | Sunday 22 June 2025 19:43:04 +0000 (0:00:00.638) 0:00:02.888 *********** 2025-06-22 19:43:07.142353 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:43:07.142365 | orchestrator | 2025-06-22 19:43:07.142376 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:43:07.142386 | orchestrator | 2025-06-22 19:43:07.142396 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:43:07.142407 | orchestrator | Sunday 22 June 2025 19:43:04 +0000 (0:00:00.111) 0:00:03.000 *********** 2025-06-22 19:43:07.142418 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:43:07.142428 | orchestrator | 2025-06-22 19:43:07.142439 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:43:07.142449 | orchestrator | Sunday 22 June 2025 19:43:04 +0000 (0:00:00.088) 0:00:03.089 *********** 2025-06-22 19:43:07.142460 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:43:07.142470 | orchestrator | 2025-06-22 19:43:07.142481 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:43:07.142491 | orchestrator | Sunday 22 June 2025 19:43:05 +0000 (0:00:00.681) 0:00:03.770 *********** 2025-06-22 19:43:07.142502 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:43:07.142512 | orchestrator | 2025-06-22 19:43:07.142523 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:43:07.142555 | orchestrator | 2025-06-22 19:43:07.142566 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:43:07.142577 | orchestrator | Sunday 22 June 2025 19:43:05 +0000 (0:00:00.106) 0:00:03.877 *********** 2025-06-22 19:43:07.142588 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:43:07.142598 | orchestrator | 2025-06-22 19:43:07.142609 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:43:07.142619 | orchestrator | Sunday 22 June 2025 19:43:05 +0000 (0:00:00.102) 0:00:03.980 *********** 2025-06-22 19:43:07.142629 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:43:07.142640 | orchestrator | 2025-06-22 19:43:07.142650 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:43:07.142661 | orchestrator | Sunday 22 June 2025 
19:43:05 +0000 (0:00:00.639) 0:00:04.619 *********** 2025-06-22 19:43:07.142685 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:43:07.142695 | orchestrator | 2025-06-22 19:43:07.142706 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:43:07.142717 | orchestrator | 2025-06-22 19:43:07.142727 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:43:07.142738 | orchestrator | Sunday 22 June 2025 19:43:06 +0000 (0:00:00.093) 0:00:04.713 *********** 2025-06-22 19:43:07.142748 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:43:07.142764 | orchestrator | 2025-06-22 19:43:07.142775 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:43:07.142786 | orchestrator | Sunday 22 June 2025 19:43:06 +0000 (0:00:00.098) 0:00:04.811 *********** 2025-06-22 19:43:07.142797 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:43:07.142807 | orchestrator | 2025-06-22 19:43:07.142818 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:43:07.142829 | orchestrator | Sunday 22 June 2025 19:43:06 +0000 (0:00:00.637) 0:00:05.449 *********** 2025-06-22 19:43:07.142859 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:43:07.142871 | orchestrator | 2025-06-22 19:43:07.142881 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:43:07.142893 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:43:07.142906 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:43:07.142916 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:43:07.142984 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:43:07.142997 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:43:07.143007 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:43:07.143018 | orchestrator | 2025-06-22 19:43:07.143029 | orchestrator | 2025-06-22 19:43:07.143040 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:43:07.143051 | orchestrator | Sunday 22 June 2025 19:43:06 +0000 (0:00:00.037) 0:00:05.486 *********** 2025-06-22 19:43:07.143062 | orchestrator | =============================================================================== 2025-06-22 19:43:07.143073 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.18s 2025-06-22 19:43:07.143083 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.62s 2025-06-22 19:43:07.143094 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.53s 2025-06-22 19:43:07.396082 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-22 19:43:09.151391 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:43:09.151489 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:43:09.151502 | orchestrator | Registering Redlock._release_script 2025-06-22 19:43:09.205563 | orchestrator | 
2025-06-22 19:43:09 | INFO  | Task d244c055-6dd0-4ec3-8c12-a05036e27560 (wait-for-connection) was prepared for execution. 2025-06-22 19:43:09.205647 | orchestrator | 2025-06-22 19:43:09 | INFO  | It takes a moment until task d244c055-6dd0-4ec3-8c12-a05036e27560 (wait-for-connection) has been started and output is visible here. 2025-06-22 19:43:25.746262 | orchestrator | 2025-06-22 19:43:25.746379 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-22 19:43:25.746396 | orchestrator | 2025-06-22 19:43:25.746409 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-22 19:43:25.746447 | orchestrator | Sunday 22 June 2025 19:43:13 +0000 (0:00:00.199) 0:00:00.199 *********** 2025-06-22 19:43:25.746459 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:43:25.746536 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:43:25.746548 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:43:25.746558 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:43:25.746624 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:43:25.746637 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:43:25.746648 | orchestrator | 2025-06-22 19:43:25.746659 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:43:25.746670 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:43:25.746683 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:43:25.746693 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:43:25.746704 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:43:25.746715 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:43:25.746725 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:43:25.746736 | orchestrator | 2025-06-22 19:43:25.746747 | orchestrator | 2025-06-22 19:43:25.746757 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:43:25.746768 | orchestrator | Sunday 22 June 2025 19:43:25 +0000 (0:00:12.424) 0:00:12.624 *********** 2025-06-22 19:43:25.746779 | orchestrator | =============================================================================== 2025-06-22 19:43:25.746790 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.43s 2025-06-22 19:43:26.152315 | orchestrator | + osism apply hddtemp 2025-06-22 19:43:27.892807 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:43:27.892906 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:43:27.892921 | orchestrator | Registering Redlock._release_script 2025-06-22 19:43:27.948473 | orchestrator | 2025-06-22 19:43:27 | INFO  | Task bd7184a2-73a0-4989-86e0-28534631aaf4 (hddtemp) was prepared for execution. 2025-06-22 19:43:27.948550 | orchestrator | 2025-06-22 19:43:27 | INFO  | It takes a moment until task bd7184a2-73a0-4989-86e0-28534631aaf4 (hddtemp) has been started and output is visible here. 
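The wait-for-connection task above blocks until every rebooted node accepts connections again before the job moves on to the hddtemp role. A minimal ad-hoc sketch of the same check, assuming the "testbed-nodes" inventory group seen in the command line is reachable from a plain Ansible call and using an illustrative timeout:

  # Ad-hoc reachability check (sketch only; the job itself drives this via "osism apply").
  ansible testbed-nodes -m ansible.builtin.wait_for_connection -a "timeout=600"

  # After the hddtemp role below has finished, the drivetemp kernel module and
  # lm-sensors can be spot-checked directly on a node:
  lsmod | grep -w drivetemp
  sensors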
2025-06-22 19:43:55.205749 | orchestrator | 2025-06-22 19:43:55.205896 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-22 19:43:55.205993 | orchestrator | 2025-06-22 19:43:55.206007 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-22 19:43:55.206070 | orchestrator | Sunday 22 June 2025 19:43:31 +0000 (0:00:00.230) 0:00:00.230 *********** 2025-06-22 19:43:55.206083 | orchestrator | ok: [testbed-manager] 2025-06-22 19:43:55.206096 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:43:55.206106 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:43:55.206117 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:43:55.206132 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:43:55.206151 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:43:55.206163 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:43:55.206174 | orchestrator | 2025-06-22 19:43:55.206203 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-22 19:43:55.206214 | orchestrator | Sunday 22 June 2025 19:43:32 +0000 (0:00:00.578) 0:00:00.808 *********** 2025-06-22 19:43:55.206227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:43:55.206265 | orchestrator | 2025-06-22 19:43:55.206279 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-22 19:43:55.206291 | orchestrator | Sunday 22 June 2025 19:43:33 +0000 (0:00:01.033) 0:00:01.842 *********** 2025-06-22 19:43:55.206303 | orchestrator | ok: [testbed-manager] 2025-06-22 19:43:55.206315 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:43:55.206327 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:43:55.206338 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:43:55.206349 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:43:55.206361 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:43:55.206373 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:43:55.206384 | orchestrator | 2025-06-22 19:43:55.206396 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-22 19:43:55.206408 | orchestrator | Sunday 22 June 2025 19:43:35 +0000 (0:00:01.869) 0:00:03.712 *********** 2025-06-22 19:43:55.206420 | orchestrator | changed: [testbed-manager] 2025-06-22 19:43:55.206433 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:43:55.206444 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:43:55.206456 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:43:55.206468 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:43:55.206480 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:43:55.206491 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:43:55.206503 | orchestrator | 2025-06-22 19:43:55.206516 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-22 19:43:55.206527 | orchestrator | Sunday 22 June 2025 19:43:36 +0000 (0:00:01.177) 0:00:04.890 *********** 2025-06-22 19:43:55.206539 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:43:55.206551 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:43:55.206563 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:43:55.206574 | orchestrator | ok: [testbed-node-3] 2025-06-22 
19:43:55.206586 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:43:55.206598 | orchestrator | ok: [testbed-manager] 2025-06-22 19:43:55.206610 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:43:55.206622 | orchestrator | 2025-06-22 19:43:55.206672 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-22 19:43:55.206683 | orchestrator | Sunday 22 June 2025 19:43:37 +0000 (0:00:01.224) 0:00:06.114 *********** 2025-06-22 19:43:55.206693 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:43:55.206704 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:43:55.206714 | orchestrator | changed: [testbed-manager] 2025-06-22 19:43:55.206725 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:43:55.206735 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:43:55.206746 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:43:55.206756 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:43:55.206767 | orchestrator | 2025-06-22 19:43:55.206778 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-22 19:43:55.206789 | orchestrator | Sunday 22 June 2025 19:43:38 +0000 (0:00:00.812) 0:00:06.927 *********** 2025-06-22 19:43:55.206800 | orchestrator | changed: [testbed-manager] 2025-06-22 19:43:55.206810 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:43:55.206821 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:43:55.206831 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:43:55.206842 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:43:55.206852 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:43:55.206863 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:43:55.206873 | orchestrator | 2025-06-22 19:43:55.206884 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-22 19:43:55.206895 | orchestrator | Sunday 22 June 2025 19:43:51 +0000 (0:00:12.852) 0:00:19.779 *********** 2025-06-22 19:43:55.206906 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:43:55.206928 | orchestrator | 2025-06-22 19:43:55.206939 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-22 19:43:55.206950 | orchestrator | Sunday 22 June 2025 19:43:52 +0000 (0:00:01.373) 0:00:21.153 *********** 2025-06-22 19:43:55.206960 | orchestrator | changed: [testbed-manager] 2025-06-22 19:43:55.206971 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:43:55.206982 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:43:55.206992 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:43:55.207003 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:43:55.207013 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:43:55.207023 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:43:55.207034 | orchestrator | 2025-06-22 19:43:55.207045 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:43:55.207056 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:43:55.207088 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:43:55.207099 | 
orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:43:55.207110 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:43:55.207127 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:43:55.207138 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:43:55.207149 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:43:55.207160 | orchestrator | 2025-06-22 19:43:55.207171 | orchestrator | 2025-06-22 19:43:55.207182 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:43:55.207192 | orchestrator | Sunday 22 June 2025 19:43:54 +0000 (0:00:01.905) 0:00:23.058 *********** 2025-06-22 19:43:55.207203 | orchestrator | =============================================================================== 2025-06-22 19:43:55.207214 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.85s 2025-06-22 19:43:55.207225 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.91s 2025-06-22 19:43:55.207235 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.87s 2025-06-22 19:43:55.207246 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.37s 2025-06-22 19:43:55.207256 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s 2025-06-22 19:43:55.207267 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.18s 2025-06-22 19:43:55.207278 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.03s 2025-06-22 19:43:55.207288 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.81s 2025-06-22 19:43:55.207299 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.58s 2025-06-22 19:43:55.471096 | orchestrator | ++ semver latest 7.1.1 2025-06-22 19:43:55.529942 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-22 19:43:55.530084 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-22 19:43:55.530100 | orchestrator | + sudo systemctl restart manager.service 2025-06-22 19:44:08.906742 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 19:44:08.906855 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-22 19:44:08.906871 | orchestrator | + local max_attempts=60 2025-06-22 19:44:08.906884 | orchestrator | + local name=ceph-ansible 2025-06-22 19:44:08.906895 | orchestrator | + local attempt_num=1 2025-06-22 19:44:08.906930 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:08.940897 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:08.940967 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:08.940980 | orchestrator | + sleep 5 2025-06-22 19:44:13.945433 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:13.985325 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:13.985431 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:13.985457 | orchestrator | + sleep 5 2025-06-22 
19:44:18.991967 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:19.025147 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:19.025209 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:19.025213 | orchestrator | + sleep 5 2025-06-22 19:44:24.031318 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:24.072427 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:24.072519 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:24.072533 | orchestrator | + sleep 5 2025-06-22 19:44:29.075776 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:29.108180 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:29.108291 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:29.108307 | orchestrator | + sleep 5 2025-06-22 19:44:34.113125 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:34.149015 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:34.149094 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:34.149108 | orchestrator | + sleep 5 2025-06-22 19:44:39.152964 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:39.188576 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:39.188628 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:39.188642 | orchestrator | + sleep 5 2025-06-22 19:44:44.194800 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:44.228128 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:44.228215 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:44.228228 | orchestrator | + sleep 5 2025-06-22 19:44:49.229814 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:49.260019 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:49.260092 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:49.260105 | orchestrator | + sleep 5 2025-06-22 19:44:54.263718 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:54.300975 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:54.301102 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:54.301115 | orchestrator | + sleep 5 2025-06-22 19:44:59.305168 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:44:59.341281 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:44:59.341369 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:44:59.341381 | orchestrator | + sleep 5 2025-06-22 19:45:04.345304 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:45:04.379465 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:45:04.379557 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:45:04.379571 | orchestrator | + sleep 5 2025-06-22 19:45:09.383543 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:45:09.427493 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:45:09.427583 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-06-22 19:45:09.427597 | orchestrator | + sleep 5 2025-06-22 19:45:14.433558 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:45:14.470721 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:45:14.470898 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-22 19:45:14.470916 | orchestrator | + local max_attempts=60 2025-06-22 19:45:14.470929 | orchestrator | + local name=kolla-ansible 2025-06-22 19:45:14.470940 | orchestrator | + local attempt_num=1 2025-06-22 19:45:14.471090 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-22 19:45:14.509367 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:45:14.509495 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-22 19:45:14.509512 | orchestrator | + local max_attempts=60 2025-06-22 19:45:14.509524 | orchestrator | + local name=osism-ansible 2025-06-22 19:45:14.509550 | orchestrator | + local attempt_num=1 2025-06-22 19:45:14.510531 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-22 19:45:14.550730 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:45:14.550820 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-22 19:45:14.550833 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-22 19:45:14.722543 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-22 19:45:14.867442 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-22 19:45:15.023255 | orchestrator | ARA in osism-ansible already disabled. 2025-06-22 19:45:15.168937 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-22 19:45:15.169442 | orchestrator | + osism apply gather-facts 2025-06-22 19:45:16.994664 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:45:16.994755 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:45:16.994833 | orchestrator | Registering Redlock._release_script 2025-06-22 19:45:17.054697 | orchestrator | 2025-06-22 19:45:17 | INFO  | Task d407d147-c688-4a66-81ff-2f4453eac376 (gather-facts) was prepared for execution. 2025-06-22 19:45:17.054780 | orchestrator | 2025-06-22 19:45:17 | INFO  | It takes a moment until task d407d147-c688-4a66-81ff-2f4453eac376 (gather-facts) has been started and output is visible here. 
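
The trace above polls `docker inspect` until each manager container (ceph-ansible, kolla-ansible, osism-ansible) reports a healthy state before the deployment continues. A minimal bash sketch of that polling pattern, reconstructed from the traced variables and commands (the real helper lives in the testbed configuration scripts; the early-exit message and return code are assumptions):

    wait_for_container_healthy() {
        # Poll a container's health status until it becomes "healthy"
        # or the maximum number of attempts is reached (5s between polls, as in the trace).
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            # Assumption: the real script gives up with a non-zero exit after max_attempts polls.
            if (( attempt_num++ == max_attempts )); then
                echo "$name did not become healthy in time" >&2
                return 1
            fi
            sleep 5
        done
    }

    # Usage as seen in the trace: up to 60 attempts (~5 minutes) per container.
    wait_for_container_healthy 60 ceph-ansible
    wait_for_container_healthy 60 kolla-ansible
    wait_for_container_healthy 60 osism-ansible
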
2025-06-22 19:45:27.430169 | orchestrator | 2025-06-22 19:45:27.430269 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:45:27.430285 | orchestrator | 2025-06-22 19:45:27.430297 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:45:27.430308 | orchestrator | Sunday 22 June 2025 19:45:21 +0000 (0:00:00.226) 0:00:00.226 *********** 2025-06-22 19:45:27.430319 | orchestrator | ok: [testbed-manager] 2025-06-22 19:45:27.430331 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:45:27.430342 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:45:27.430353 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:45:27.430364 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:45:27.430375 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:45:27.430385 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:45:27.430396 | orchestrator | 2025-06-22 19:45:27.430407 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:45:27.430418 | orchestrator | 2025-06-22 19:45:27.430429 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:45:27.430441 | orchestrator | Sunday 22 June 2025 19:45:26 +0000 (0:00:05.703) 0:00:05.930 *********** 2025-06-22 19:45:27.430452 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:45:27.430463 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:45:27.430474 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:45:27.430485 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:45:27.430496 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:45:27.430507 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:45:27.430517 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:45:27.430528 | orchestrator | 2025-06-22 19:45:27.430539 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:45:27.430550 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:45:27.430563 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:45:27.430573 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:45:27.430584 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:45:27.430595 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:45:27.430677 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:45:27.430692 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:45:27.430703 | orchestrator | 2025-06-22 19:45:27.430714 | orchestrator | 2025-06-22 19:45:27.430725 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:45:27.430738 | orchestrator | Sunday 22 June 2025 19:45:27 +0000 (0:00:00.455) 0:00:06.385 *********** 2025-06-22 19:45:27.430750 | orchestrator | =============================================================================== 2025-06-22 19:45:27.430762 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.70s 2025-06-22 
19:45:27.430774 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2025-06-22 19:45:27.604129 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-22 19:45:27.617224 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-22 19:45:27.628977 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-22 19:45:27.640357 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-22 19:45:27.658134 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-22 19:45:27.671144 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-22 19:45:27.682627 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-22 19:45:27.692988 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-22 19:45:27.700699 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-22 19:45:27.708568 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-22 19:45:27.719749 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-22 19:45:27.735227 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-22 19:45:27.747593 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-06-22 19:45:27.758214 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-22 19:45:27.767670 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-22 19:45:27.776925 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-22 19:45:27.786331 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-22 19:45:27.795635 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-22 19:45:27.805442 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-22 19:45:27.815024 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-22 19:45:27.824561 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-22 19:45:28.004725 | orchestrator | ok: Runtime: 0:19:55.955457 2025-06-22 19:45:28.128180 | 2025-06-22 19:45:28.128364 | TASK [Deploy services] 2025-06-22 19:45:28.664733 | orchestrator | skipping: Conditional result was False 2025-06-22 19:45:28.678767 | 2025-06-22 19:45:28.679004 | TASK [Deploy in a nutshell] 2025-06-22 19:45:29.356626 | orchestrator | 2025-06-22 
19:45:29.356710 | orchestrator | # PULL IMAGES 2025-06-22 19:45:29.356719 | orchestrator | 2025-06-22 19:45:29.356730 | orchestrator | + set -e 2025-06-22 19:45:29.356737 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:45:29.356745 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:45:29.356750 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:45:29.356768 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:45:29.356777 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:45:29.356782 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:45:29.356786 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:45:29.356793 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:45:29.356797 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:45:29.356803 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:45:29.356807 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:45:29.356841 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:45:29.356848 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-22 19:45:29.356855 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-22 19:45:29.356865 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:45:29.356869 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:45:29.356873 | orchestrator | ++ export ARA=false 2025-06-22 19:45:29.356877 | orchestrator | ++ ARA=false 2025-06-22 19:45:29.356881 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:45:29.356884 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:45:29.356888 | orchestrator | ++ export TEMPEST=false 2025-06-22 19:45:29.356892 | orchestrator | ++ TEMPEST=false 2025-06-22 19:45:29.356896 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:45:29.356899 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:45:29.356903 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 19:45:29.356907 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 19:45:29.356911 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:45:29.356914 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 19:45:29.356918 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 19:45:29.356922 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:45:29.356926 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:45:29.356929 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:45:29.356933 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:45:29.356940 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:45:29.356944 | orchestrator | + echo 2025-06-22 19:45:29.356948 | orchestrator | + echo '# PULL IMAGES' 2025-06-22 19:45:29.356952 | orchestrator | + echo 2025-06-22 19:45:29.356958 | orchestrator | ++ semver latest 7.0.0 2025-06-22 19:45:29.403083 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-22 19:45:29.403119 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-22 19:45:29.403125 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-22 19:45:30.943020 | orchestrator | 2025-06-22 19:45:30 | INFO  | Trying to run play pull-images in environment custom 2025-06-22 19:45:30.948008 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:45:30.948034 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:45:30.948040 | orchestrator | Registering Redlock._release_script 2025-06-22 19:45:30.998558 | orchestrator | 2025-06-22 19:45:30 | INFO  | Task ece197b0-fd4f-4024-8b19-6bf5213d63d7 (pull-images) was prepared 
for execution. 2025-06-22 19:45:30.998606 | orchestrator | 2025-06-22 19:45:30 | INFO  | It takes a moment until task ece197b0-fd4f-4024-8b19-6bf5213d63d7 (pull-images) has been started and output is visible here. 2025-06-22 19:47:36.215259 | orchestrator | 2025-06-22 19:47:36.215384 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-22 19:47:36.215404 | orchestrator | 2025-06-22 19:47:36.215417 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-22 19:47:36.215440 | orchestrator | Sunday 22 June 2025 19:45:34 +0000 (0:00:00.123) 0:00:00.123 *********** 2025-06-22 19:47:36.215451 | orchestrator | changed: [testbed-manager] 2025-06-22 19:47:36.215463 | orchestrator | 2025-06-22 19:47:36.215475 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-22 19:47:36.215486 | orchestrator | Sunday 22 June 2025 19:46:44 +0000 (0:01:09.724) 0:01:09.847 *********** 2025-06-22 19:47:36.215498 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-22 19:47:36.215513 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-22 19:47:36.215524 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-22 19:47:36.215568 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-22 19:47:36.215586 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-22 19:47:36.215597 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-22 19:47:36.215608 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-22 19:47:36.215619 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-22 19:47:36.215629 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-22 19:47:36.215640 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-22 19:47:36.215651 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-22 19:47:36.215662 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-22 19:47:36.215672 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-22 19:47:36.215683 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-22 19:47:36.215693 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-22 19:47:36.215704 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-22 19:47:36.215715 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-22 19:47:36.215725 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-22 19:47:36.215736 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-22 19:47:36.215746 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-22 19:47:36.215757 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-22 19:47:36.215767 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-22 19:47:36.215778 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-22 19:47:36.215788 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-22 19:47:36.215799 | orchestrator | 2025-06-22 19:47:36.215810 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:47:36.215821 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:47:36.215834 | orchestrator | 2025-06-22 19:47:36.215844 | orchestrator 
| 2025-06-22 19:47:36.215855 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:47:36.215866 | orchestrator | Sunday 22 June 2025 19:47:35 +0000 (0:00:51.740) 0:02:01.588 *********** 2025-06-22 19:47:36.215878 | orchestrator | =============================================================================== 2025-06-22 19:47:36.215889 | orchestrator | Pull keystone image ---------------------------------------------------- 69.72s 2025-06-22 19:47:36.215899 | orchestrator | Pull other images ------------------------------------------------------ 51.74s 2025-06-22 19:47:38.389150 | orchestrator | 2025-06-22 19:47:38 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-22 19:47:38.393779 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:47:38.393816 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:47:38.394147 | orchestrator | Registering Redlock._release_script 2025-06-22 19:47:38.457737 | orchestrator | 2025-06-22 19:47:38 | INFO  | Task b8b39d30-dac5-423a-86e3-492bcd128378 (wipe-partitions) was prepared for execution. 2025-06-22 19:47:38.457846 | orchestrator | 2025-06-22 19:47:38 | INFO  | It takes a moment until task b8b39d30-dac5-423a-86e3-492bcd128378 (wipe-partitions) has been started and output is visible here. 2025-06-22 19:47:51.284879 | orchestrator | 2025-06-22 19:47:51.284982 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-22 19:47:51.285000 | orchestrator | 2025-06-22 19:47:51.285016 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-22 19:47:51.285036 | orchestrator | Sunday 22 June 2025 19:47:42 +0000 (0:00:00.127) 0:00:00.127 *********** 2025-06-22 19:47:51.285082 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:47:51.285094 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:47:51.285105 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:47:51.285123 | orchestrator | 2025-06-22 19:47:51.285154 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-22 19:47:51.285197 | orchestrator | Sunday 22 June 2025 19:47:43 +0000 (0:00:00.538) 0:00:00.666 *********** 2025-06-22 19:47:51.285209 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:51.285220 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:51.285231 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:51.285241 | orchestrator | 2025-06-22 19:47:51.285252 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-22 19:47:51.285263 | orchestrator | Sunday 22 June 2025 19:47:43 +0000 (0:00:00.236) 0:00:00.903 *********** 2025-06-22 19:47:51.285274 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:51.285285 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:47:51.285296 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:51.285307 | orchestrator | 2025-06-22 19:47:51.285317 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-22 19:47:51.285328 | orchestrator | Sunday 22 June 2025 19:47:43 +0000 (0:00:00.687) 0:00:01.591 *********** 2025-06-22 19:47:51.285339 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:51.285350 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:51.285361 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:51.285371 | orchestrator | 2025-06-22 
19:47:51.285386 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-22 19:47:51.285397 | orchestrator | Sunday 22 June 2025 19:47:44 +0000 (0:00:00.235) 0:00:01.826 *********** 2025-06-22 19:47:51.285408 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:47:51.285419 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:47:51.285448 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:47:51.285459 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:47:51.285481 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:47:51.285492 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:47:51.285502 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:47:51.285523 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:47:51.285535 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:47:51.285545 | orchestrator | 2025-06-22 19:47:51.285556 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-22 19:47:51.285567 | orchestrator | Sunday 22 June 2025 19:47:45 +0000 (0:00:01.152) 0:00:02.979 *********** 2025-06-22 19:47:51.285578 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:47:51.285602 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:47:51.285613 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:47:51.285623 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:47:51.285634 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:47:51.285645 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:47:51.285655 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:47:51.285666 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:47:51.285676 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:47:51.285687 | orchestrator | 2025-06-22 19:47:51.285698 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-22 19:47:51.285709 | orchestrator | Sunday 22 June 2025 19:47:46 +0000 (0:00:01.315) 0:00:04.294 *********** 2025-06-22 19:47:51.285720 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:47:51.285730 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:47:51.285741 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:47:51.285751 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:47:51.285762 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:47:51.285772 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:47:51.285783 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:47:51.285802 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:47:51.285812 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:47:51.285823 | orchestrator | 2025-06-22 19:47:51.285834 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-22 19:47:51.285845 | orchestrator | Sunday 22 June 2025 19:47:49 +0000 (0:00:03.162) 0:00:07.457 *********** 2025-06-22 19:47:51.285856 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:47:51.285866 | 
orchestrator | changed: [testbed-node-4] 2025-06-22 19:47:51.285877 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:47:51.285887 | orchestrator | 2025-06-22 19:47:51.285901 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-06-22 19:47:51.285919 | orchestrator | Sunday 22 June 2025 19:47:50 +0000 (0:00:00.580) 0:00:08.038 *********** 2025-06-22 19:47:51.285935 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:47:51.285959 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:47:51.285976 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:47:51.285994 | orchestrator | 2025-06-22 19:47:51.286013 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:47:51.286129 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:51.286143 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:51.286173 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:47:51.286185 | orchestrator | 2025-06-22 19:47:51.286196 | orchestrator | 2025-06-22 19:47:51.286207 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:47:51.286218 | orchestrator | Sunday 22 June 2025 19:47:51 +0000 (0:00:00.628) 0:00:08.666 *********** 2025-06-22 19:47:51.286228 | orchestrator | =============================================================================== 2025-06-22 19:47:51.286239 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 3.16s 2025-06-22 19:47:51.286249 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.32s 2025-06-22 19:47:51.286260 | orchestrator | Check device availability ----------------------------------------------- 1.15s 2025-06-22 19:47:51.286271 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.69s 2025-06-22 19:47:51.286281 | orchestrator | Request device events from the kernel ----------------------------------- 0.63s 2025-06-22 19:47:51.286292 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-06-22 19:47:51.286302 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.54s 2025-06-22 19:47:51.286313 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-06-22 19:47:51.286323 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2025-06-22 19:47:53.078626 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:47:53.078767 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:47:53.078794 | orchestrator | Registering Redlock._release_script 2025-06-22 19:47:53.135483 | orchestrator | 2025-06-22 19:47:53 | INFO  | Task a09966da-e26f-4082-bf14-83110edb98bf (facts) was prepared for execution. 2025-06-22 19:47:53.135565 | orchestrator | 2025-06-22 19:47:53 | INFO  | It takes a moment until task a09966da-e26f-4082-bf14-83110edb98bf (facts) has been started and output is visible here. 
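
The wipe-partitions play above clears the spare Ceph disks (/dev/sdb, /dev/sdc, /dev/sdd) on the three storage nodes before OSDs are provisioned on them. A rough per-device equivalent of its main steps in plain bash, illustrative only (device list and the 32M figure are taken from the play output; exact flags are assumptions, not the playbook's tasks verbatim):

    # Illustrative reconstruction of the wipe steps, not the playbook itself.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        test -b "$dev" || continue                              # "Check device availability"
        wipefs --all "$dev"                                     # "Wipe partitions with wipefs"
        dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct   # "Overwrite first 32M with zeros"
    done
    udevadm control --reload-rules                              # "Reload udev rules"
    udevadm trigger                                             # "Request device events from the kernel"
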
2025-06-22 19:48:04.963635 | orchestrator | 2025-06-22 19:48:04.963731 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 19:48:04.963748 | orchestrator | 2025-06-22 19:48:04.963761 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:48:04.963773 | orchestrator | Sunday 22 June 2025 19:47:56 +0000 (0:00:00.259) 0:00:00.259 *********** 2025-06-22 19:48:04.963810 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:04.963823 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:48:04.963834 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:48:04.963845 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:48:04.963856 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:04.963867 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:04.963877 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:04.963888 | orchestrator | 2025-06-22 19:48:04.963899 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:48:04.963910 | orchestrator | Sunday 22 June 2025 19:47:57 +0000 (0:00:01.094) 0:00:01.353 *********** 2025-06-22 19:48:04.963922 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:48:04.963933 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:48:04.963944 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:48:04.963955 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:48:04.963966 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:04.963977 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:04.963987 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:04.963998 | orchestrator | 2025-06-22 19:48:04.964009 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:48:04.964020 | orchestrator | 2025-06-22 19:48:04.964031 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:48:04.964042 | orchestrator | Sunday 22 June 2025 19:47:59 +0000 (0:00:01.389) 0:00:02.743 *********** 2025-06-22 19:48:04.964053 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:48:04.964101 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:48:04.964113 | orchestrator | ok: [testbed-manager] 2025-06-22 19:48:04.964124 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:48:04.964135 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:04.964146 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:04.964157 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:04.964168 | orchestrator | 2025-06-22 19:48:04.964179 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:48:04.964190 | orchestrator | 2025-06-22 19:48:04.964203 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:48:04.964231 | orchestrator | Sunday 22 June 2025 19:48:04 +0000 (0:00:04.799) 0:00:07.543 *********** 2025-06-22 19:48:04.964244 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:48:04.964256 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:48:04.964269 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:48:04.964281 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:48:04.964294 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:04.964305 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:04.964317 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 19:48:04.964329 | orchestrator | 2025-06-22 19:48:04.964342 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:48:04.964355 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:48:04.964367 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:48:04.964380 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:48:04.964392 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:48:04.964406 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:48:04.964421 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:48:04.964440 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:48:04.964471 | orchestrator | 2025-06-22 19:48:04.964495 | orchestrator | 2025-06-22 19:48:04.964512 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:48:04.964532 | orchestrator | Sunday 22 June 2025 19:48:04 +0000 (0:00:00.489) 0:00:08.032 *********** 2025-06-22 19:48:04.964552 | orchestrator | =============================================================================== 2025-06-22 19:48:04.964570 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.80s 2025-06-22 19:48:04.964587 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.39s 2025-06-22 19:48:04.964607 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.09s 2025-06-22 19:48:04.964629 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.49s 2025-06-22 19:48:06.520783 | orchestrator | 2025-06-22 19:48:06 | INFO  | Task a9ea8479-925a-4a42-b7d2-b5e1717ba531 (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-22 19:48:06.520873 | orchestrator | 2025-06-22 19:48:06 | INFO  | It takes a moment until task a9ea8479-925a-4a42-b7d2-b5e1717ba531 (ceph-configure-lvm-volumes) has been started and output is visible here. 
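
Each deployment step in this job is queued through the OSISM manager: an `osism apply <playbook>` call registers the Redlock scripts, prepares a task with a UUID, and then streams the Ansible output once a worker has picked it up, which is why a short pause precedes every PLAY header above. The same pattern used interactively would look like this (the flags mirror the calls visible in this log; the retry count and environment are examples, not required values):

    osism apply facts                          # refresh cached facts for all hosts
    osism apply -r 2 -e custom pull-images     # retry up to 2 times, play from the "custom" environment
    osism apply ceph-configure-lvm-volumes     # generate the LVM layout for the Ceph OSDs
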
2025-06-22 19:48:17.246840 | orchestrator | 2025-06-22 19:48:17.246936 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:48:17.246953 | orchestrator | 2025-06-22 19:48:17.246965 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:48:17.246997 | orchestrator | Sunday 22 June 2025 19:48:10 +0000 (0:00:00.300) 0:00:00.300 *********** 2025-06-22 19:48:17.247009 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:48:17.247020 | orchestrator | 2025-06-22 19:48:17.247031 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:48:17.247042 | orchestrator | Sunday 22 June 2025 19:48:10 +0000 (0:00:00.242) 0:00:00.542 *********** 2025-06-22 19:48:17.247053 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:17.247163 | orchestrator | 2025-06-22 19:48:17.247212 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.247225 | orchestrator | Sunday 22 June 2025 19:48:10 +0000 (0:00:00.235) 0:00:00.778 *********** 2025-06-22 19:48:17.247236 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:48:17.247247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:48:17.247260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:48:17.247270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:48:17.247281 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:48:17.247292 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:48:17.247303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:48:17.247313 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:48:17.247324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-22 19:48:17.247335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:48:17.247345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:48:17.247356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:48:17.247367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:48:17.247379 | orchestrator | 2025-06-22 19:48:17.247392 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.247435 | orchestrator | Sunday 22 June 2025 19:48:11 +0000 (0:00:00.362) 0:00:01.141 *********** 2025-06-22 19:48:17.247448 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.247460 | orchestrator | 2025-06-22 19:48:17.247472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.247484 | orchestrator | Sunday 22 June 2025 19:48:11 +0000 (0:00:00.383) 0:00:01.525 *********** 2025-06-22 19:48:17.247496 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
19:48:17.247517 | orchestrator | 2025-06-22 19:48:17.247691 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.247705 | orchestrator | Sunday 22 June 2025 19:48:11 +0000 (0:00:00.183) 0:00:01.708 *********** 2025-06-22 19:48:17.247718 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.247730 | orchestrator | 2025-06-22 19:48:17.247741 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.247752 | orchestrator | Sunday 22 June 2025 19:48:11 +0000 (0:00:00.187) 0:00:01.896 *********** 2025-06-22 19:48:17.247763 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.247773 | orchestrator | 2025-06-22 19:48:17.248057 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248072 | orchestrator | Sunday 22 June 2025 19:48:12 +0000 (0:00:00.184) 0:00:02.081 *********** 2025-06-22 19:48:17.248119 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.248139 | orchestrator | 2025-06-22 19:48:17.248163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248174 | orchestrator | Sunday 22 June 2025 19:48:12 +0000 (0:00:00.188) 0:00:02.270 *********** 2025-06-22 19:48:17.248185 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.248196 | orchestrator | 2025-06-22 19:48:17.248207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248252 | orchestrator | Sunday 22 June 2025 19:48:12 +0000 (0:00:00.211) 0:00:02.481 *********** 2025-06-22 19:48:17.248265 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.248275 | orchestrator | 2025-06-22 19:48:17.248286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248297 | orchestrator | Sunday 22 June 2025 19:48:12 +0000 (0:00:00.177) 0:00:02.658 *********** 2025-06-22 19:48:17.248307 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.248318 | orchestrator | 2025-06-22 19:48:17.248471 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248687 | orchestrator | Sunday 22 June 2025 19:48:12 +0000 (0:00:00.204) 0:00:02.863 *********** 2025-06-22 19:48:17.248699 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3) 2025-06-22 19:48:17.248711 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3) 2025-06-22 19:48:17.248722 | orchestrator | 2025-06-22 19:48:17.248732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248743 | orchestrator | Sunday 22 June 2025 19:48:13 +0000 (0:00:00.387) 0:00:03.251 *********** 2025-06-22 19:48:17.248774 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade) 2025-06-22 19:48:17.248785 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade) 2025-06-22 19:48:17.248796 | orchestrator | 2025-06-22 19:48:17.248807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248818 | orchestrator | Sunday 22 June 2025 19:48:13 +0000 (0:00:00.379) 0:00:03.630 *********** 2025-06-22 
19:48:17.248828 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d) 2025-06-22 19:48:17.248839 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d) 2025-06-22 19:48:17.248850 | orchestrator | 2025-06-22 19:48:17.248861 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248884 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:00.542) 0:00:04.173 *********** 2025-06-22 19:48:17.248912 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc) 2025-06-22 19:48:17.248923 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc) 2025-06-22 19:48:17.248934 | orchestrator | 2025-06-22 19:48:17.248951 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.248963 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:00.530) 0:00:04.704 *********** 2025-06-22 19:48:17.248989 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:48:17.249000 | orchestrator | 2025-06-22 19:48:17.249011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249022 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:00.593) 0:00:05.298 *********** 2025-06-22 19:48:17.249032 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:48:17.249043 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:48:17.249054 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:48:17.249064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:48:17.249075 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:48:17.249129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:48:17.249140 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:48:17.249151 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:48:17.249161 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-22 19:48:17.249172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:48:17.249182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:48:17.249193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:48:17.249203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:48:17.249214 | orchestrator | 2025-06-22 19:48:17.249224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249235 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:00.339) 0:00:05.637 *********** 2025-06-22 19:48:17.249246 | orchestrator | skipping: [testbed-node-3] 
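
The repeated "Add known links" tasks above resolve each raw device (sda..sdd) to its stable /dev/disk/by-id aliases (the scsi-0QEMU_QEMU_HARDDISK_... names), so the generated Ceph configuration does not depend on kernel enumeration order. To inspect the same information directly on a node, one could run, for example:

    lsblk -o NAME,SIZE,TYPE,SERIAL             # raw devices as the play enumerates them (sda..sdd, loop*, sr0)
    ls -l /dev/disk/by-id/                     # stable aliases, e.g. scsi-0QEMU_QEMU_HARDDISK_<uuid> -> ../../sdb
    udevadm info --query=symlink --name=/dev/sdb   # all known symlinks for a single device
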
2025-06-22 19:48:17.249256 | orchestrator | 2025-06-22 19:48:17.249267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249311 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:00.191) 0:00:05.829 *********** 2025-06-22 19:48:17.249322 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.249345 | orchestrator | 2025-06-22 19:48:17.249490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249515 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.189) 0:00:06.019 *********** 2025-06-22 19:48:17.249526 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.249536 | orchestrator | 2025-06-22 19:48:17.249547 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249558 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.223) 0:00:06.243 *********** 2025-06-22 19:48:17.249600 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.249614 | orchestrator | 2025-06-22 19:48:17.249625 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249636 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.192) 0:00:06.435 *********** 2025-06-22 19:48:17.249708 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.249721 | orchestrator | 2025-06-22 19:48:17.249779 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249790 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.174) 0:00:06.609 *********** 2025-06-22 19:48:17.249870 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.249882 | orchestrator | 2025-06-22 19:48:17.249893 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249904 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.186) 0:00:06.795 *********** 2025-06-22 19:48:17.249915 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.249925 | orchestrator | 2025-06-22 19:48:17.249936 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:17.249947 | orchestrator | Sunday 22 June 2025 19:48:17 +0000 (0:00:00.211) 0:00:07.006 *********** 2025-06-22 19:48:17.249967 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.895754 | orchestrator | 2025-06-22 19:48:23.895849 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:23.895866 | orchestrator | Sunday 22 June 2025 19:48:17 +0000 (0:00:00.185) 0:00:07.192 *********** 2025-06-22 19:48:23.895878 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-22 19:48:23.895890 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-22 19:48:23.895901 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-22 19:48:23.895912 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-22 19:48:23.895922 | orchestrator | 2025-06-22 19:48:23.895934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:23.895944 | orchestrator | Sunday 22 June 2025 19:48:18 +0000 (0:00:00.855) 0:00:08.048 *********** 2025-06-22 19:48:23.895955 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.895966 | orchestrator | 2025-06-22 19:48:23.895977 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:23.895988 | orchestrator | Sunday 22 June 2025 19:48:18 +0000 (0:00:00.194) 0:00:08.242 *********** 2025-06-22 19:48:23.895999 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896009 | orchestrator | 2025-06-22 19:48:23.896020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:23.896031 | orchestrator | Sunday 22 June 2025 19:48:18 +0000 (0:00:00.186) 0:00:08.428 *********** 2025-06-22 19:48:23.896042 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896052 | orchestrator | 2025-06-22 19:48:23.896063 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:23.896074 | orchestrator | Sunday 22 June 2025 19:48:18 +0000 (0:00:00.200) 0:00:08.629 *********** 2025-06-22 19:48:23.896112 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896124 | orchestrator | 2025-06-22 19:48:23.896146 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:48:23.896157 | orchestrator | Sunday 22 June 2025 19:48:18 +0000 (0:00:00.216) 0:00:08.846 *********** 2025-06-22 19:48:23.896168 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:48:23.896189 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:48:23.896200 | orchestrator | 2025-06-22 19:48:23.896210 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:48:23.896221 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.156) 0:00:09.002 *********** 2025-06-22 19:48:23.896232 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896243 | orchestrator | 2025-06-22 19:48:23.896254 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-22 19:48:23.896265 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.130) 0:00:09.132 *********** 2025-06-22 19:48:23.896276 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896287 | orchestrator | 2025-06-22 19:48:23.896298 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:48:23.896348 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.119) 0:00:09.252 *********** 2025-06-22 19:48:23.896361 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896374 | orchestrator | 2025-06-22 19:48:23.896386 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 19:48:23.896398 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.130) 0:00:09.383 *********** 2025-06-22 19:48:23.896411 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:23.896423 | orchestrator | 2025-06-22 19:48:23.896435 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:48:23.896446 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.138) 0:00:09.521 *********** 2025-06-22 19:48:23.896459 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988500a7-3c26-5f89-b599-1c63900dc902'}}) 2025-06-22 19:48:23.896471 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1623286-8630-50a6-960f-aa7fe8c22ac9'}}) 2025-06-22 19:48:23.896483 | orchestrator | 
2025-06-22 19:48:23.896495 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:48:23.896507 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.147) 0:00:09.669 *********** 2025-06-22 19:48:23.896520 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988500a7-3c26-5f89-b599-1c63900dc902'}})  2025-06-22 19:48:23.896539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1623286-8630-50a6-960f-aa7fe8c22ac9'}})  2025-06-22 19:48:23.896551 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896562 | orchestrator | 2025-06-22 19:48:23.896574 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:48:23.896586 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.137) 0:00:09.806 *********** 2025-06-22 19:48:23.896598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988500a7-3c26-5f89-b599-1c63900dc902'}})  2025-06-22 19:48:23.896610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1623286-8630-50a6-960f-aa7fe8c22ac9'}})  2025-06-22 19:48:23.896622 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896634 | orchestrator | 2025-06-22 19:48:23.896646 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:48:23.896658 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.152) 0:00:09.958 *********** 2025-06-22 19:48:23.896671 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988500a7-3c26-5f89-b599-1c63900dc902'}})  2025-06-22 19:48:23.896684 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1623286-8630-50a6-960f-aa7fe8c22ac9'}})  2025-06-22 19:48:23.896695 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896706 | orchestrator | 2025-06-22 19:48:23.896735 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:48:23.896748 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.272) 0:00:10.231 *********** 2025-06-22 19:48:23.896758 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:23.896769 | orchestrator | 2025-06-22 19:48:23.896780 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:48:23.896791 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.120) 0:00:10.351 *********** 2025-06-22 19:48:23.896801 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:23.896812 | orchestrator | 2025-06-22 19:48:23.896823 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:48:23.896834 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.124) 0:00:10.476 *********** 2025-06-22 19:48:23.896844 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896855 | orchestrator | 2025-06-22 19:48:23.896866 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:48:23.896877 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.127) 0:00:10.604 *********** 2025-06-22 19:48:23.896896 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896907 | orchestrator | 2025-06-22 19:48:23.896918 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-06-22 19:48:23.896929 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.131) 0:00:10.735 *********** 2025-06-22 19:48:23.896940 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.896950 | orchestrator | 2025-06-22 19:48:23.896967 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:48:23.896978 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.131) 0:00:10.867 *********** 2025-06-22 19:48:23.896989 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:48:23.897000 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:48:23.897010 | orchestrator |  "sdb": { 2025-06-22 19:48:23.897022 | orchestrator |  "osd_lvm_uuid": "988500a7-3c26-5f89-b599-1c63900dc902" 2025-06-22 19:48:23.897033 | orchestrator |  }, 2025-06-22 19:48:23.897043 | orchestrator |  "sdc": { 2025-06-22 19:48:23.897054 | orchestrator |  "osd_lvm_uuid": "f1623286-8630-50a6-960f-aa7fe8c22ac9" 2025-06-22 19:48:23.897065 | orchestrator |  } 2025-06-22 19:48:23.897076 | orchestrator |  } 2025-06-22 19:48:23.897101 | orchestrator | } 2025-06-22 19:48:23.897112 | orchestrator | 2025-06-22 19:48:23.897123 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:48:23.897134 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.127) 0:00:10.994 *********** 2025-06-22 19:48:23.897145 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.897156 | orchestrator | 2025-06-22 19:48:23.897166 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:48:23.897177 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.126) 0:00:11.120 *********** 2025-06-22 19:48:23.897188 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.897199 | orchestrator | 2025-06-22 19:48:23.897210 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:48:23.897220 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.123) 0:00:11.244 *********** 2025-06-22 19:48:23.897231 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.897242 | orchestrator | 2025-06-22 19:48:23.897253 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:48:23.897263 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.129) 0:00:11.374 *********** 2025-06-22 19:48:23.897278 | orchestrator | changed: [testbed-node-3] => { 2025-06-22 19:48:23.897289 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:48:23.897300 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:48:23.897311 | orchestrator |  "sdb": { 2025-06-22 19:48:23.897322 | orchestrator |  "osd_lvm_uuid": "988500a7-3c26-5f89-b599-1c63900dc902" 2025-06-22 19:48:23.897333 | orchestrator |  }, 2025-06-22 19:48:23.897344 | orchestrator |  "sdc": { 2025-06-22 19:48:23.897355 | orchestrator |  "osd_lvm_uuid": "f1623286-8630-50a6-960f-aa7fe8c22ac9" 2025-06-22 19:48:23.897366 | orchestrator |  } 2025-06-22 19:48:23.897377 | orchestrator |  }, 2025-06-22 19:48:23.897387 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:48:23.897398 | orchestrator |  { 2025-06-22 19:48:23.897409 | orchestrator |  "data": "osd-block-988500a7-3c26-5f89-b599-1c63900dc902", 2025-06-22 19:48:23.897420 | orchestrator |  "data_vg": "ceph-988500a7-3c26-5f89-b599-1c63900dc902" 2025-06-22 19:48:23.897431 | orchestrator |  }, 2025-06-22 
19:48:23.897441 | orchestrator |  { 2025-06-22 19:48:23.897452 | orchestrator |  "data": "osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9", 2025-06-22 19:48:23.897463 | orchestrator |  "data_vg": "ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9" 2025-06-22 19:48:23.897474 | orchestrator |  } 2025-06-22 19:48:23.897485 | orchestrator |  ] 2025-06-22 19:48:23.897502 | orchestrator |  } 2025-06-22 19:48:23.897513 | orchestrator | } 2025-06-22 19:48:23.897524 | orchestrator | 2025-06-22 19:48:23.897534 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:48:23.897546 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.191) 0:00:11.565 *********** 2025-06-22 19:48:23.897556 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:48:23.897567 | orchestrator | 2025-06-22 19:48:23.897578 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:48:23.897589 | orchestrator | 2025-06-22 19:48:23.897599 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:48:23.897610 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:01.832) 0:00:13.398 *********** 2025-06-22 19:48:23.897621 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:48:23.897632 | orchestrator | 2025-06-22 19:48:23.897643 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:48:23.897653 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:00.232) 0:00:13.631 *********** 2025-06-22 19:48:23.897664 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:23.897675 | orchestrator | 2025-06-22 19:48:23.897686 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:23.897703 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:00.210) 0:00:13.841 *********** 2025-06-22 19:48:30.820960 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:48:30.821056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:48:30.821072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:48:30.821085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:48:30.821136 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:48:30.821149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:48:30.821160 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:48:30.821171 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:48:30.821182 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-22 19:48:30.821194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:48:30.821205 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:48:30.821216 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:48:30.821227 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:48:30.821238 | orchestrator | 2025-06-22 19:48:30.821250 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821262 | orchestrator | Sunday 22 June 2025 19:48:24 +0000 (0:00:00.367) 0:00:14.208 *********** 2025-06-22 19:48:30.821274 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.821286 | orchestrator | 2025-06-22 19:48:30.821297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821309 | orchestrator | Sunday 22 June 2025 19:48:24 +0000 (0:00:00.217) 0:00:14.426 *********** 2025-06-22 19:48:30.821320 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.821331 | orchestrator | 2025-06-22 19:48:30.821342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821353 | orchestrator | Sunday 22 June 2025 19:48:24 +0000 (0:00:00.187) 0:00:14.614 *********** 2025-06-22 19:48:30.821364 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.821375 | orchestrator | 2025-06-22 19:48:30.821387 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821420 | orchestrator | Sunday 22 June 2025 19:48:24 +0000 (0:00:00.190) 0:00:14.804 *********** 2025-06-22 19:48:30.821432 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.821443 | orchestrator | 2025-06-22 19:48:30.821454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821465 | orchestrator | Sunday 22 June 2025 19:48:25 +0000 (0:00:00.187) 0:00:14.992 *********** 2025-06-22 19:48:30.821476 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.821487 | orchestrator | 2025-06-22 19:48:30.821498 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821509 | orchestrator | Sunday 22 June 2025 19:48:25 +0000 (0:00:00.200) 0:00:15.192 *********** 2025-06-22 19:48:30.821520 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.821531 | orchestrator | 2025-06-22 19:48:30.821542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821553 | orchestrator | Sunday 22 June 2025 19:48:25 +0000 (0:00:00.523) 0:00:15.715 *********** 2025-06-22 19:48:30.821563 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.821574 | orchestrator | 2025-06-22 19:48:30.821585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821596 | orchestrator | Sunday 22 June 2025 19:48:25 +0000 (0:00:00.194) 0:00:15.909 *********** 2025-06-22 19:48:30.821607 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.821618 | orchestrator | 2025-06-22 19:48:30.821629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821641 | orchestrator | Sunday 22 June 2025 19:48:26 +0000 (0:00:00.185) 0:00:16.095 *********** 2025-06-22 19:48:30.821652 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da) 2025-06-22 19:48:30.821663 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da) 2025-06-22 19:48:30.821674 | orchestrator | 2025-06-22 
19:48:30.821685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821696 | orchestrator | Sunday 22 June 2025 19:48:26 +0000 (0:00:00.413) 0:00:16.508 *********** 2025-06-22 19:48:30.821707 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a) 2025-06-22 19:48:30.821718 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a) 2025-06-22 19:48:30.821729 | orchestrator | 2025-06-22 19:48:30.821740 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821751 | orchestrator | Sunday 22 June 2025 19:48:26 +0000 (0:00:00.397) 0:00:16.906 *********** 2025-06-22 19:48:30.821762 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b) 2025-06-22 19:48:30.821773 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b) 2025-06-22 19:48:30.821784 | orchestrator | 2025-06-22 19:48:30.821794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821805 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:00.387) 0:00:17.293 *********** 2025-06-22 19:48:30.821834 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1) 2025-06-22 19:48:30.821846 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1) 2025-06-22 19:48:30.821857 | orchestrator | 2025-06-22 19:48:30.821883 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:30.821895 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:00.392) 0:00:17.685 *********** 2025-06-22 19:48:30.821913 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:48:30.821934 | orchestrator | 2025-06-22 19:48:30.821954 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.821973 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.280) 0:00:17.965 *********** 2025-06-22 19:48:30.821994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:48:30.822005 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:48:30.822069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:48:30.822082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:48:30.822093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:48:30.822128 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:48:30.822139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:48:30.822150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:48:30.822160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-22 19:48:30.822171 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:48:30.822182 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:48:30.822192 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:48:30.822203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:48:30.822214 | orchestrator | 2025-06-22 19:48:30.822225 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822236 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.343) 0:00:18.309 *********** 2025-06-22 19:48:30.822247 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822258 | orchestrator | 2025-06-22 19:48:30.822268 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822279 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.165) 0:00:18.474 *********** 2025-06-22 19:48:30.822290 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822301 | orchestrator | 2025-06-22 19:48:30.822312 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822323 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.457) 0:00:18.932 *********** 2025-06-22 19:48:30.822333 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822344 | orchestrator | 2025-06-22 19:48:30.822355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822366 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.179) 0:00:19.111 *********** 2025-06-22 19:48:30.822377 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822388 | orchestrator | 2025-06-22 19:48:30.822398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822409 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.175) 0:00:19.287 *********** 2025-06-22 19:48:30.822420 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822431 | orchestrator | 2025-06-22 19:48:30.822442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822452 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.186) 0:00:19.474 *********** 2025-06-22 19:48:30.822463 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822474 | orchestrator | 2025-06-22 19:48:30.822485 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822496 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.170) 0:00:19.644 *********** 2025-06-22 19:48:30.822506 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822517 | orchestrator | 2025-06-22 19:48:30.822528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822539 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.176) 0:00:19.821 *********** 2025-06-22 19:48:30.822558 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822568 | orchestrator | 2025-06-22 19:48:30.822579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822590 | orchestrator | Sunday 22 June 2025 
19:48:30 +0000 (0:00:00.176) 0:00:19.997 *********** 2025-06-22 19:48:30.822601 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-22 19:48:30.822613 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-22 19:48:30.822624 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-22 19:48:30.822635 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-22 19:48:30.822646 | orchestrator | 2025-06-22 19:48:30.822657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:30.822667 | orchestrator | Sunday 22 June 2025 19:48:30 +0000 (0:00:00.579) 0:00:20.576 *********** 2025-06-22 19:48:30.822678 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:30.822689 | orchestrator | 2025-06-22 19:48:30.822709 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:37.323721 | orchestrator | Sunday 22 June 2025 19:48:30 +0000 (0:00:00.188) 0:00:20.765 *********** 2025-06-22 19:48:37.323842 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.323871 | orchestrator | 2025-06-22 19:48:37.323891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:37.323911 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.226) 0:00:20.992 *********** 2025-06-22 19:48:37.323929 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.323948 | orchestrator | 2025-06-22 19:48:37.323968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:37.323987 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.197) 0:00:21.189 *********** 2025-06-22 19:48:37.324006 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324025 | orchestrator | 2025-06-22 19:48:37.324064 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:48:37.324086 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.220) 0:00:21.410 *********** 2025-06-22 19:48:37.324172 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:48:37.324187 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:48:37.324198 | orchestrator | 2025-06-22 19:48:37.324209 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:48:37.324220 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.312) 0:00:21.723 *********** 2025-06-22 19:48:37.324231 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324242 | orchestrator | 2025-06-22 19:48:37.324255 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-22 19:48:37.324267 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.126) 0:00:21.850 *********** 2025-06-22 19:48:37.324279 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324291 | orchestrator | 2025-06-22 19:48:37.324303 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:48:37.324315 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.133) 0:00:21.983 *********** 2025-06-22 19:48:37.324327 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324339 | orchestrator | 2025-06-22 19:48:37.324352 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 
19:48:37.324364 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.136) 0:00:22.120 *********** 2025-06-22 19:48:37.324376 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:37.324389 | orchestrator | 2025-06-22 19:48:37.324401 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:48:37.324414 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.141) 0:00:22.261 *********** 2025-06-22 19:48:37.324426 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '809c9636-3d83-5d3b-8a98-356a4387ae79'}}) 2025-06-22 19:48:37.324438 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}}) 2025-06-22 19:48:37.324469 | orchestrator | 2025-06-22 19:48:37.324482 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:48:37.324494 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.172) 0:00:22.433 *********** 2025-06-22 19:48:37.324507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '809c9636-3d83-5d3b-8a98-356a4387ae79'}})  2025-06-22 19:48:37.324521 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}})  2025-06-22 19:48:37.324533 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324544 | orchestrator | 2025-06-22 19:48:37.324555 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:48:37.324571 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.152) 0:00:22.586 *********** 2025-06-22 19:48:37.324591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '809c9636-3d83-5d3b-8a98-356a4387ae79'}})  2025-06-22 19:48:37.324609 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}})  2025-06-22 19:48:37.324627 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324646 | orchestrator | 2025-06-22 19:48:37.324665 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:48:37.324685 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.149) 0:00:22.736 *********** 2025-06-22 19:48:37.324703 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '809c9636-3d83-5d3b-8a98-356a4387ae79'}})  2025-06-22 19:48:37.324719 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}})  2025-06-22 19:48:37.324731 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324742 | orchestrator | 2025-06-22 19:48:37.324752 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:48:37.324763 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.147) 0:00:22.884 *********** 2025-06-22 19:48:37.324773 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:37.324784 | orchestrator | 2025-06-22 19:48:37.324795 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:48:37.324805 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.147) 0:00:23.031 *********** 2025-06-22 19:48:37.324816 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:37.324826 
| orchestrator | 2025-06-22 19:48:37.324837 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:48:37.324848 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.146) 0:00:23.178 *********** 2025-06-22 19:48:37.324858 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324869 | orchestrator | 2025-06-22 19:48:37.324900 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:48:37.324911 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.140) 0:00:23.318 *********** 2025-06-22 19:48:37.324922 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324932 | orchestrator | 2025-06-22 19:48:37.324943 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-22 19:48:37.324954 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.367) 0:00:23.686 *********** 2025-06-22 19:48:37.324964 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.324975 | orchestrator | 2025-06-22 19:48:37.324986 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:48:37.324997 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.134) 0:00:23.820 *********** 2025-06-22 19:48:37.325007 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:48:37.325018 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:48:37.325029 | orchestrator |  "sdb": { 2025-06-22 19:48:37.325039 | orchestrator |  "osd_lvm_uuid": "809c9636-3d83-5d3b-8a98-356a4387ae79" 2025-06-22 19:48:37.325062 | orchestrator |  }, 2025-06-22 19:48:37.325072 | orchestrator |  "sdc": { 2025-06-22 19:48:37.325083 | orchestrator |  "osd_lvm_uuid": "0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e" 2025-06-22 19:48:37.325094 | orchestrator |  } 2025-06-22 19:48:37.325126 | orchestrator |  } 2025-06-22 19:48:37.325138 | orchestrator | } 2025-06-22 19:48:37.325149 | orchestrator | 2025-06-22 19:48:37.325160 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:48:37.325171 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.149) 0:00:23.970 *********** 2025-06-22 19:48:37.325181 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.325192 | orchestrator | 2025-06-22 19:48:37.325202 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:48:37.325213 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.144) 0:00:24.114 *********** 2025-06-22 19:48:37.325224 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.325234 | orchestrator | 2025-06-22 19:48:37.325245 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:48:37.325256 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.154) 0:00:24.268 *********** 2025-06-22 19:48:37.325274 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:37.325285 | orchestrator | 2025-06-22 19:48:37.325296 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:48:37.325307 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.134) 0:00:24.402 *********** 2025-06-22 19:48:37.325317 | orchestrator | changed: [testbed-node-4] => { 2025-06-22 19:48:37.325328 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:48:37.325339 | orchestrator |  "ceph_osd_devices": { 2025-06-22 
19:48:37.325349 | orchestrator |  "sdb": { 2025-06-22 19:48:37.325360 | orchestrator |  "osd_lvm_uuid": "809c9636-3d83-5d3b-8a98-356a4387ae79" 2025-06-22 19:48:37.325371 | orchestrator |  }, 2025-06-22 19:48:37.325382 | orchestrator |  "sdc": { 2025-06-22 19:48:37.325392 | orchestrator |  "osd_lvm_uuid": "0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e" 2025-06-22 19:48:37.325403 | orchestrator |  } 2025-06-22 19:48:37.325413 | orchestrator |  }, 2025-06-22 19:48:37.325424 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:48:37.325434 | orchestrator |  { 2025-06-22 19:48:37.325445 | orchestrator |  "data": "osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79", 2025-06-22 19:48:37.325456 | orchestrator |  "data_vg": "ceph-809c9636-3d83-5d3b-8a98-356a4387ae79" 2025-06-22 19:48:37.325467 | orchestrator |  }, 2025-06-22 19:48:37.325477 | orchestrator |  { 2025-06-22 19:48:37.325488 | orchestrator |  "data": "osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e", 2025-06-22 19:48:37.325499 | orchestrator |  "data_vg": "ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e" 2025-06-22 19:48:37.325509 | orchestrator |  } 2025-06-22 19:48:37.325520 | orchestrator |  ] 2025-06-22 19:48:37.325531 | orchestrator |  } 2025-06-22 19:48:37.325541 | orchestrator | } 2025-06-22 19:48:37.325552 | orchestrator | 2025-06-22 19:48:37.325563 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:48:37.325573 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.214) 0:00:24.617 *********** 2025-06-22 19:48:37.325584 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:48:37.325595 | orchestrator | 2025-06-22 19:48:37.325605 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:48:37.325616 | orchestrator | 2025-06-22 19:48:37.325626 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:48:37.325637 | orchestrator | Sunday 22 June 2025 19:48:35 +0000 (0:00:01.135) 0:00:25.753 *********** 2025-06-22 19:48:37.325648 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:48:37.325658 | orchestrator | 2025-06-22 19:48:37.325669 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:48:37.325686 | orchestrator | Sunday 22 June 2025 19:48:36 +0000 (0:00:00.480) 0:00:26.234 *********** 2025-06-22 19:48:37.325697 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:37.325708 | orchestrator | 2025-06-22 19:48:37.325719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:37.325730 | orchestrator | Sunday 22 June 2025 19:48:36 +0000 (0:00:00.673) 0:00:26.908 *********** 2025-06-22 19:48:37.325740 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:48:37.325751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:48:37.325761 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:48:37.325772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:48:37.325782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:48:37.325793 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-06-22 19:48:37.325810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:48:44.672756 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:48:44.672852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-22 19:48:44.672867 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:48:44.672878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:48:44.672889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:48:44.672900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:48:44.672911 | orchestrator | 2025-06-22 19:48:44.672922 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.672934 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.355) 0:00:27.263 *********** 2025-06-22 19:48:44.672945 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.672956 | orchestrator | 2025-06-22 19:48:44.672967 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.672978 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.182) 0:00:27.446 *********** 2025-06-22 19:48:44.672989 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673000 | orchestrator | 2025-06-22 19:48:44.673011 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673022 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.188) 0:00:27.635 *********** 2025-06-22 19:48:44.673033 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673044 | orchestrator | 2025-06-22 19:48:44.673055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673065 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.192) 0:00:27.827 *********** 2025-06-22 19:48:44.673077 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673088 | orchestrator | 2025-06-22 19:48:44.673099 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673110 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.165) 0:00:27.992 *********** 2025-06-22 19:48:44.673141 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673152 | orchestrator | 2025-06-22 19:48:44.673163 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673174 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.178) 0:00:28.171 *********** 2025-06-22 19:48:44.673185 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673196 | orchestrator | 2025-06-22 19:48:44.673207 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673240 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.189) 0:00:28.360 *********** 2025-06-22 19:48:44.673252 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673263 | orchestrator | 2025-06-22 19:48:44.673274 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-06-22 19:48:44.673284 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.184) 0:00:28.545 *********** 2025-06-22 19:48:44.673295 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673306 | orchestrator | 2025-06-22 19:48:44.673319 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673331 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.214) 0:00:28.759 *********** 2025-06-22 19:48:44.673343 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6) 2025-06-22 19:48:44.673356 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6) 2025-06-22 19:48:44.673368 | orchestrator | 2025-06-22 19:48:44.673381 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673394 | orchestrator | Sunday 22 June 2025 19:48:39 +0000 (0:00:00.517) 0:00:29.277 *********** 2025-06-22 19:48:44.673405 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2) 2025-06-22 19:48:44.673433 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2) 2025-06-22 19:48:44.673446 | orchestrator | 2025-06-22 19:48:44.673458 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673471 | orchestrator | Sunday 22 June 2025 19:48:39 +0000 (0:00:00.675) 0:00:29.953 *********** 2025-06-22 19:48:44.673482 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae) 2025-06-22 19:48:44.673501 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae) 2025-06-22 19:48:44.673513 | orchestrator | 2025-06-22 19:48:44.673525 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673537 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:00.391) 0:00:30.344 *********** 2025-06-22 19:48:44.673549 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2) 2025-06-22 19:48:44.673562 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2) 2025-06-22 19:48:44.673574 | orchestrator | 2025-06-22 19:48:44.673586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:44.673598 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:00.420) 0:00:30.765 *********** 2025-06-22 19:48:44.673610 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:48:44.673622 | orchestrator | 2025-06-22 19:48:44.673633 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.673646 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:00.288) 0:00:31.054 *********** 2025-06-22 19:48:44.673675 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:48:44.673687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:48:44.673698 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:48:44.673709 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:48:44.673719 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:48:44.673730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:48:44.673740 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:48:44.673751 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:48:44.673770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-22 19:48:44.673781 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:48:44.673791 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:48:44.673802 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:48:44.673813 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:48:44.673823 | orchestrator | 2025-06-22 19:48:44.673834 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.673845 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:00.366) 0:00:31.420 *********** 2025-06-22 19:48:44.673855 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673866 | orchestrator | 2025-06-22 19:48:44.673877 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.673888 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:00.187) 0:00:31.607 *********** 2025-06-22 19:48:44.673898 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673909 | orchestrator | 2025-06-22 19:48:44.673920 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.673930 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:00.189) 0:00:31.797 *********** 2025-06-22 19:48:44.673941 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673951 | orchestrator | 2025-06-22 19:48:44.673962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.673973 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.174) 0:00:31.971 *********** 2025-06-22 19:48:44.673984 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.673994 | orchestrator | 2025-06-22 19:48:44.674005 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.674062 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.189) 0:00:32.161 *********** 2025-06-22 19:48:44.674074 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.674085 | orchestrator | 2025-06-22 19:48:44.674096 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.674106 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.193) 0:00:32.354 *********** 2025-06-22 19:48:44.674132 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.674143 | orchestrator | 2025-06-22 19:48:44.674154 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-06-22 19:48:44.674165 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.496) 0:00:32.851 *********** 2025-06-22 19:48:44.674175 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.674186 | orchestrator | 2025-06-22 19:48:44.674197 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.674208 | orchestrator | Sunday 22 June 2025 19:48:43 +0000 (0:00:00.207) 0:00:33.058 *********** 2025-06-22 19:48:44.674218 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.674229 | orchestrator | 2025-06-22 19:48:44.674240 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.674251 | orchestrator | Sunday 22 June 2025 19:48:43 +0000 (0:00:00.190) 0:00:33.248 *********** 2025-06-22 19:48:44.674261 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-22 19:48:44.674272 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-22 19:48:44.674283 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-22 19:48:44.674294 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-22 19:48:44.674304 | orchestrator | 2025-06-22 19:48:44.674315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.674326 | orchestrator | Sunday 22 June 2025 19:48:43 +0000 (0:00:00.607) 0:00:33.856 *********** 2025-06-22 19:48:44.674337 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.674354 | orchestrator | 2025-06-22 19:48:44.674365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.674376 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.184) 0:00:34.041 *********** 2025-06-22 19:48:44.674387 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.674397 | orchestrator | 2025-06-22 19:48:44.674408 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.674419 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.203) 0:00:34.244 *********** 2025-06-22 19:48:44.674430 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.674440 | orchestrator | 2025-06-22 19:48:44.674451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.674462 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.191) 0:00:34.436 *********** 2025-06-22 19:48:44.674472 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:44.674483 | orchestrator | 2025-06-22 19:48:44.674494 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:48:44.674512 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.182) 0:00:34.619 *********** 2025-06-22 19:48:48.365995 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:48:48.366209 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:48:48.366239 | orchestrator | 2025-06-22 19:48:48.366261 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:48:48.366282 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.141) 0:00:34.760 *********** 2025-06-22 19:48:48.366302 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.366321 | orchestrator | 2025-06-22 19:48:48.366343 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-06-22 19:48:48.366362 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.117) 0:00:34.878 *********** 2025-06-22 19:48:48.366382 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.366403 | orchestrator | 2025-06-22 19:48:48.366424 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:48:48.366444 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.124) 0:00:35.003 *********** 2025-06-22 19:48:48.366465 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.366486 | orchestrator | 2025-06-22 19:48:48.366507 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 19:48:48.366528 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.126) 0:00:35.129 *********** 2025-06-22 19:48:48.366583 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:48.366611 | orchestrator | 2025-06-22 19:48:48.366635 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:48:48.366658 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.267) 0:00:35.397 *********** 2025-06-22 19:48:48.366678 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2f14396-315c-50f9-a6a7-8817318b41c3'}}) 2025-06-22 19:48:48.366700 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60bbbdec-af53-55ad-b293-31f676104815'}}) 2025-06-22 19:48:48.366721 | orchestrator | 2025-06-22 19:48:48.366742 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:48:48.366765 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.163) 0:00:35.560 *********** 2025-06-22 19:48:48.366788 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2f14396-315c-50f9-a6a7-8817318b41c3'}})  2025-06-22 19:48:48.366812 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60bbbdec-af53-55ad-b293-31f676104815'}})  2025-06-22 19:48:48.366834 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.366855 | orchestrator | 2025-06-22 19:48:48.366878 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:48:48.366900 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.149) 0:00:35.709 *********** 2025-06-22 19:48:48.366951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2f14396-315c-50f9-a6a7-8817318b41c3'}})  2025-06-22 19:48:48.366975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60bbbdec-af53-55ad-b293-31f676104815'}})  2025-06-22 19:48:48.366997 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.367017 | orchestrator | 2025-06-22 19:48:48.367036 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:48:48.367058 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.143) 0:00:35.853 *********** 2025-06-22 19:48:48.367078 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2f14396-315c-50f9-a6a7-8817318b41c3'}})  2025-06-22 19:48:48.367099 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '60bbbdec-af53-55ad-b293-31f676104815'}})  2025-06-22 
19:48:48.367146 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.367164 | orchestrator | 2025-06-22 19:48:48.367180 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:48:48.367196 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.161) 0:00:36.015 *********** 2025-06-22 19:48:48.367213 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:48.367230 | orchestrator | 2025-06-22 19:48:48.367247 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:48:48.367266 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.133) 0:00:36.148 *********** 2025-06-22 19:48:48.367284 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:48:48.367303 | orchestrator | 2025-06-22 19:48:48.367323 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:48:48.367334 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.136) 0:00:36.285 *********** 2025-06-22 19:48:48.367345 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.367356 | orchestrator | 2025-06-22 19:48:48.367367 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:48:48.367378 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.143) 0:00:36.429 *********** 2025-06-22 19:48:48.367389 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.367399 | orchestrator | 2025-06-22 19:48:48.367413 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-22 19:48:48.367425 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.114) 0:00:36.543 *********** 2025-06-22 19:48:48.367435 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.367446 | orchestrator | 2025-06-22 19:48:48.367457 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:48:48.367467 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.119) 0:00:36.663 *********** 2025-06-22 19:48:48.367478 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:48:48.367489 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:48:48.367500 | orchestrator |  "sdb": { 2025-06-22 19:48:48.367511 | orchestrator |  "osd_lvm_uuid": "b2f14396-315c-50f9-a6a7-8817318b41c3" 2025-06-22 19:48:48.367543 | orchestrator |  }, 2025-06-22 19:48:48.367554 | orchestrator |  "sdc": { 2025-06-22 19:48:48.367565 | orchestrator |  "osd_lvm_uuid": "60bbbdec-af53-55ad-b293-31f676104815" 2025-06-22 19:48:48.367576 | orchestrator |  } 2025-06-22 19:48:48.367586 | orchestrator |  } 2025-06-22 19:48:48.367598 | orchestrator | } 2025-06-22 19:48:48.367608 | orchestrator | 2025-06-22 19:48:48.367619 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:48:48.367630 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.132) 0:00:36.795 *********** 2025-06-22 19:48:48.367641 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.367651 | orchestrator | 2025-06-22 19:48:48.367662 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:48:48.367673 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.119) 0:00:36.915 *********** 2025-06-22 19:48:48.367693 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.367704 | orchestrator | 2025-06-22 19:48:48.367715 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:48:48.367725 | orchestrator | Sunday 22 June 2025 19:48:47 +0000 (0:00:00.258) 0:00:37.173 *********** 2025-06-22 19:48:48.367736 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:48:48.367746 | orchestrator | 2025-06-22 19:48:48.367757 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:48:48.367767 | orchestrator | Sunday 22 June 2025 19:48:47 +0000 (0:00:00.110) 0:00:37.284 *********** 2025-06-22 19:48:48.367778 | orchestrator | changed: [testbed-node-5] => { 2025-06-22 19:48:48.367789 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:48:48.367798 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:48:48.367808 | orchestrator |  "sdb": { 2025-06-22 19:48:48.367817 | orchestrator |  "osd_lvm_uuid": "b2f14396-315c-50f9-a6a7-8817318b41c3" 2025-06-22 19:48:48.367827 | orchestrator |  }, 2025-06-22 19:48:48.367836 | orchestrator |  "sdc": { 2025-06-22 19:48:48.367846 | orchestrator |  "osd_lvm_uuid": "60bbbdec-af53-55ad-b293-31f676104815" 2025-06-22 19:48:48.367855 | orchestrator |  } 2025-06-22 19:48:48.367864 | orchestrator |  }, 2025-06-22 19:48:48.367874 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:48:48.367883 | orchestrator |  { 2025-06-22 19:48:48.367893 | orchestrator |  "data": "osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3", 2025-06-22 19:48:48.367903 | orchestrator |  "data_vg": "ceph-b2f14396-315c-50f9-a6a7-8817318b41c3" 2025-06-22 19:48:48.367912 | orchestrator |  }, 2025-06-22 19:48:48.367921 | orchestrator |  { 2025-06-22 19:48:48.367931 | orchestrator |  "data": "osd-block-60bbbdec-af53-55ad-b293-31f676104815", 2025-06-22 19:48:48.367940 | orchestrator |  "data_vg": "ceph-60bbbdec-af53-55ad-b293-31f676104815" 2025-06-22 19:48:48.367950 | orchestrator |  } 2025-06-22 19:48:48.367959 | orchestrator |  ] 2025-06-22 19:48:48.367969 | orchestrator |  } 2025-06-22 19:48:48.367978 | orchestrator | } 2025-06-22 19:48:48.367988 | orchestrator | 2025-06-22 19:48:48.367997 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:48:48.368007 | orchestrator | Sunday 22 June 2025 19:48:47 +0000 (0:00:00.196) 0:00:37.480 *********** 2025-06-22 19:48:48.368016 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:48:48.368025 | orchestrator | 2025-06-22 19:48:48.368035 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:48:48.368045 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:48:48.368055 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:48:48.368065 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:48:48.368074 | orchestrator | 2025-06-22 19:48:48.368083 | orchestrator | 2025-06-22 19:48:48.368093 | orchestrator | 2025-06-22 19:48:48.368102 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:48:48.368112 | orchestrator | Sunday 22 June 2025 19:48:48 +0000 (0:00:00.823) 0:00:38.304 *********** 2025-06-22 19:48:48.368147 | orchestrator | =============================================================================== 2025-06-22 19:48:48.368163 | orchestrator | Write configuration file 
------------------------------------------------ 3.79s 2025-06-22 19:48:48.368179 | orchestrator | Get initial list of available block devices ----------------------------- 1.12s 2025-06-22 19:48:48.368189 | orchestrator | Add known links to the list of available block devices ------------------ 1.09s 2025-06-22 19:48:48.368199 | orchestrator | Add known partitions to the list of available block devices ------------- 1.05s 2025-06-22 19:48:48.368217 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.96s 2025-06-22 19:48:48.368227 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2025-06-22 19:48:48.368236 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-06-22 19:48:48.368245 | orchestrator | Set WAL devices config data --------------------------------------------- 0.61s 2025-06-22 19:48:48.368255 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.61s 2025-06-22 19:48:48.368264 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2025-06-22 19:48:48.368273 | orchestrator | Print configuration data ------------------------------------------------ 0.60s 2025-06-22 19:48:48.368288 | orchestrator | Add known links to the list of available block devices ------------------ 0.59s 2025-06-22 19:48:48.368298 | orchestrator | Generate lvm_volumes structure (block + db + wal) ----------------------- 0.58s 2025-06-22 19:48:48.368307 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2025-06-22 19:48:48.368324 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.55s 2025-06-22 19:48:48.572826 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2025-06-22 19:48:48.572911 | orchestrator | Print DB devices -------------------------------------------------------- 0.54s 2025-06-22 19:48:48.572924 | orchestrator | Add known links to the list of available block devices ------------------ 0.53s 2025-06-22 19:48:48.572937 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2025-06-22 19:48:48.572948 | orchestrator | Add known links to the list of available block devices ------------------ 0.52s 2025-06-22 19:49:00.783611 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:49:00.783711 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:49:00.783725 | orchestrator | Registering Redlock._release_script 2025-06-22 19:49:00.847227 | orchestrator | 2025-06-22 19:49:00 | INFO  | Task 510e5928-99b6-4d1b-b5d3-e8022cc952b9 (sync inventory) is running in background. Output coming soon. 
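For reference, the configuration data printed above for testbed-node-5 corresponds to a small YAML document. A minimal sketch of what the "Write configuration file" handler plausibly persists on testbed-manager is shown below; the target path and file name are not visible in the log and are assumptions, while the keys and UUIDs are taken verbatim from the printed data.

    # Sketch only: assumed vars file written for testbed-node-5 on testbed-manager
    # (file name/location are assumptions; values mirror the "Print configuration data" output)
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: b2f14396-315c-50f9-a6a7-8817318b41c3
      sdc:
        osd_lvm_uuid: 60bbbdec-af53-55ad-b293-31f676104815
    lvm_volumes:
      - data: osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3
        data_vg: ceph-b2f14396-315c-50f9-a6a7-8817318b41c3
      - data: osd-block-60bbbdec-af53-55ad-b293-31f676104815
        data_vg: ceph-60bbbdec-af53-55ad-b293-31f676104815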
2025-06-22 19:49:20.923641 | orchestrator | 2025-06-22 19:49:02 | INFO  | Starting group_vars file reorganization 2025-06-22 19:49:20.923744 | orchestrator | 2025-06-22 19:49:02 | INFO  | Moved 0 file(s) to their respective directories 2025-06-22 19:49:20.923759 | orchestrator | 2025-06-22 19:49:02 | INFO  | Group_vars file reorganization completed 2025-06-22 19:49:20.923771 | orchestrator | 2025-06-22 19:49:04 | INFO  | Starting variable preparation from inventory 2025-06-22 19:49:20.923783 | orchestrator | 2025-06-22 19:49:06 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-06-22 19:49:20.923794 | orchestrator | 2025-06-22 19:49:06 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-06-22 19:49:20.923805 | orchestrator | 2025-06-22 19:49:06 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-06-22 19:49:20.923816 | orchestrator | 2025-06-22 19:49:06 | INFO  | 3 file(s) written, 6 host(s) processed 2025-06-22 19:49:20.923827 | orchestrator | 2025-06-22 19:49:06 | INFO  | Variable preparation completed 2025-06-22 19:49:20.923838 | orchestrator | 2025-06-22 19:49:07 | INFO  | Starting inventory overwrite handling 2025-06-22 19:49:20.923848 | orchestrator | 2025-06-22 19:49:07 | INFO  | Handling group overwrites in 99-overwrite 2025-06-22 19:49:20.923859 | orchestrator | 2025-06-22 19:49:07 | INFO  | Removing group frr:children from 60-generic 2025-06-22 19:49:20.923870 | orchestrator | 2025-06-22 19:49:07 | INFO  | Removing group storage:children from 50-kolla 2025-06-22 19:49:20.923881 | orchestrator | 2025-06-22 19:49:07 | INFO  | Removing group netbird:children from 50-infrastruture 2025-06-22 19:49:20.923892 | orchestrator | 2025-06-22 19:49:07 | INFO  | Removing group ceph-rgw from 50-ceph 2025-06-22 19:49:20.923930 | orchestrator | 2025-06-22 19:49:07 | INFO  | Removing group ceph-mds from 50-ceph 2025-06-22 19:49:20.923970 | orchestrator | 2025-06-22 19:49:07 | INFO  | Handling group overwrites in 20-roles 2025-06-22 19:49:20.923998 | orchestrator | 2025-06-22 19:49:07 | INFO  | Removing group k3s_node from 50-infrastruture 2025-06-22 19:49:20.924009 | orchestrator | 2025-06-22 19:49:07 | INFO  | Removed 6 group(s) in total 2025-06-22 19:49:20.924020 | orchestrator | 2025-06-22 19:49:07 | INFO  | Inventory overwrite handling completed 2025-06-22 19:49:20.924031 | orchestrator | 2025-06-22 19:49:08 | INFO  | Starting merge of inventory files 2025-06-22 19:49:20.924042 | orchestrator | 2025-06-22 19:49:08 | INFO  | Inventory files merged successfully 2025-06-22 19:49:20.924053 | orchestrator | 2025-06-22 19:49:12 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-06-22 19:49:20.924079 | orchestrator | 2025-06-22 19:49:19 | INFO  | Successfully wrote ClusterShell configuration 2025-06-22 19:49:20.924091 | orchestrator | [master b4d0b3f] 2025-06-22-19-49 2025-06-22 19:49:20.924104 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-06-22 19:49:22.963760 | orchestrator | 2025-06-22 19:49:22 | INFO  | Task 71ff8ca9-ccfa-4bbb-8ad1-34fcb715d48c (ceph-create-lvm-devices) was prepared for execution. 2025-06-22 19:49:22.963845 | orchestrator | 2025-06-22 19:49:22 | INFO  | It takes a moment until task 71ff8ca9-ccfa-4bbb-8ad1-34fcb715d48c (ceph-create-lvm-devices) has been started and output is visible here. 
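The "sync inventory" task above regenerates derived group_vars files (050-kolla-ceph-rgw-hosts.yml, 050-infrastructure-cephclient-mons.yml, 050-ceph-cluster-fsid.yml) from the Ansible inventory before the Ceph LVM play starts. A minimal sketch of what the generated 050-ceph-cluster-fsid.yml could contain follows; the actual fsid is not shown in the log, so the value below is only a placeholder.

    # Sketch only: assumed content of the generated 050-ceph-cluster-fsid.yml
    # (placeholder fsid; the real value is derived from the testbed inventory)
    ceph_cluster_fsid: 00000000-0000-0000-0000-000000000000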
2025-06-22 19:49:34.689527 | orchestrator | 2025-06-22 19:49:34.689624 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:49:34.689639 | orchestrator | 2025-06-22 19:49:34.689649 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:49:34.689660 | orchestrator | Sunday 22 June 2025 19:49:27 +0000 (0:00:00.309) 0:00:00.309 *********** 2025-06-22 19:49:34.689670 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:49:34.689681 | orchestrator | 2025-06-22 19:49:34.689691 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:49:34.689700 | orchestrator | Sunday 22 June 2025 19:49:27 +0000 (0:00:00.243) 0:00:00.552 *********** 2025-06-22 19:49:34.689710 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:34.689721 | orchestrator | 2025-06-22 19:49:34.689732 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.689742 | orchestrator | Sunday 22 June 2025 19:49:27 +0000 (0:00:00.223) 0:00:00.776 *********** 2025-06-22 19:49:34.689752 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:49:34.689762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:49:34.689772 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:49:34.689782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:49:34.689791 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:49:34.689801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:49:34.689811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:49:34.689820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:49:34.689830 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-22 19:49:34.689840 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:49:34.689849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:49:34.689880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:49:34.689891 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:49:34.689900 | orchestrator | 2025-06-22 19:49:34.689910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.689920 | orchestrator | Sunday 22 June 2025 19:49:27 +0000 (0:00:00.405) 0:00:01.181 *********** 2025-06-22 19:49:34.689930 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.689940 | orchestrator | 2025-06-22 19:49:34.689949 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.689959 | orchestrator | Sunday 22 June 2025 19:49:28 +0000 (0:00:00.487) 0:00:01.669 *********** 2025-06-22 19:49:34.689969 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
19:49:34.689978 | orchestrator | 2025-06-22 19:49:34.689988 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.689998 | orchestrator | Sunday 22 June 2025 19:49:28 +0000 (0:00:00.196) 0:00:01.866 *********** 2025-06-22 19:49:34.690007 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690076 | orchestrator | 2025-06-22 19:49:34.690088 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690099 | orchestrator | Sunday 22 June 2025 19:49:28 +0000 (0:00:00.179) 0:00:02.046 *********** 2025-06-22 19:49:34.690110 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690120 | orchestrator | 2025-06-22 19:49:34.690132 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690143 | orchestrator | Sunday 22 June 2025 19:49:28 +0000 (0:00:00.200) 0:00:02.246 *********** 2025-06-22 19:49:34.690153 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690190 | orchestrator | 2025-06-22 19:49:34.690202 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690213 | orchestrator | Sunday 22 June 2025 19:49:29 +0000 (0:00:00.186) 0:00:02.432 *********** 2025-06-22 19:49:34.690224 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690235 | orchestrator | 2025-06-22 19:49:34.690246 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690257 | orchestrator | Sunday 22 June 2025 19:49:29 +0000 (0:00:00.198) 0:00:02.631 *********** 2025-06-22 19:49:34.690267 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690278 | orchestrator | 2025-06-22 19:49:34.690289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690299 | orchestrator | Sunday 22 June 2025 19:49:29 +0000 (0:00:00.199) 0:00:02.831 *********** 2025-06-22 19:49:34.690310 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690321 | orchestrator | 2025-06-22 19:49:34.690332 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690343 | orchestrator | Sunday 22 June 2025 19:49:29 +0000 (0:00:00.203) 0:00:03.034 *********** 2025-06-22 19:49:34.690355 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3) 2025-06-22 19:49:34.690367 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3) 2025-06-22 19:49:34.690377 | orchestrator | 2025-06-22 19:49:34.690388 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690399 | orchestrator | Sunday 22 June 2025 19:49:30 +0000 (0:00:00.411) 0:00:03.446 *********** 2025-06-22 19:49:34.690426 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade) 2025-06-22 19:49:34.690452 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade) 2025-06-22 19:49:34.690463 | orchestrator | 2025-06-22 19:49:34.690472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690482 | orchestrator | Sunday 22 June 2025 19:49:30 +0000 (0:00:00.405) 0:00:03.852 *********** 2025-06-22 
19:49:34.690501 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d) 2025-06-22 19:49:34.690510 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d) 2025-06-22 19:49:34.690520 | orchestrator | 2025-06-22 19:49:34.690530 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690539 | orchestrator | Sunday 22 June 2025 19:49:31 +0000 (0:00:00.648) 0:00:04.500 *********** 2025-06-22 19:49:34.690549 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc) 2025-06-22 19:49:34.690559 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc) 2025-06-22 19:49:34.690568 | orchestrator | 2025-06-22 19:49:34.690578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:34.690587 | orchestrator | Sunday 22 June 2025 19:49:31 +0000 (0:00:00.676) 0:00:05.177 *********** 2025-06-22 19:49:34.690597 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:49:34.690607 | orchestrator | 2025-06-22 19:49:34.690616 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.690626 | orchestrator | Sunday 22 June 2025 19:49:32 +0000 (0:00:00.750) 0:00:05.927 *********** 2025-06-22 19:49:34.690635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:49:34.690645 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:49:34.690654 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:49:34.690664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:49:34.690674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:49:34.690683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:49:34.690693 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:49:34.690702 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:49:34.690712 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-22 19:49:34.690721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:49:34.690731 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:49:34.690741 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:49:34.690750 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:49:34.690759 | orchestrator | 2025-06-22 19:49:34.690769 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.690779 | orchestrator | Sunday 22 June 2025 19:49:33 +0000 (0:00:00.430) 0:00:06.358 *********** 2025-06-22 19:49:34.690788 | orchestrator | skipping: [testbed-node-3] 
2025-06-22 19:49:34.690798 | orchestrator | 2025-06-22 19:49:34.690808 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.690817 | orchestrator | Sunday 22 June 2025 19:49:33 +0000 (0:00:00.198) 0:00:06.556 *********** 2025-06-22 19:49:34.690827 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690836 | orchestrator | 2025-06-22 19:49:34.690845 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.690855 | orchestrator | Sunday 22 June 2025 19:49:33 +0000 (0:00:00.203) 0:00:06.760 *********** 2025-06-22 19:49:34.690865 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690874 | orchestrator | 2025-06-22 19:49:34.690883 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.690902 | orchestrator | Sunday 22 June 2025 19:49:33 +0000 (0:00:00.215) 0:00:06.976 *********** 2025-06-22 19:49:34.690912 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690921 | orchestrator | 2025-06-22 19:49:34.690931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.690940 | orchestrator | Sunday 22 June 2025 19:49:33 +0000 (0:00:00.197) 0:00:07.173 *********** 2025-06-22 19:49:34.690950 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.690960 | orchestrator | 2025-06-22 19:49:34.690969 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.690987 | orchestrator | Sunday 22 June 2025 19:49:34 +0000 (0:00:00.189) 0:00:07.362 *********** 2025-06-22 19:49:34.691003 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.691019 | orchestrator | 2025-06-22 19:49:34.691035 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.691048 | orchestrator | Sunday 22 June 2025 19:49:34 +0000 (0:00:00.196) 0:00:07.559 *********** 2025-06-22 19:49:34.691058 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:34.691067 | orchestrator | 2025-06-22 19:49:34.691077 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:34.691087 | orchestrator | Sunday 22 June 2025 19:49:34 +0000 (0:00:00.205) 0:00:07.764 *********** 2025-06-22 19:49:34.691103 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.694692 | orchestrator | 2025-06-22 19:49:42.694838 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:42.694856 | orchestrator | Sunday 22 June 2025 19:49:34 +0000 (0:00:00.197) 0:00:07.962 *********** 2025-06-22 19:49:42.694869 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-22 19:49:42.694886 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-22 19:49:42.694897 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-22 19:49:42.694906 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-22 19:49:42.694915 | orchestrator | 2025-06-22 19:49:42.694924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:42.694933 | orchestrator | Sunday 22 June 2025 19:49:35 +0000 (0:00:01.087) 0:00:09.050 *********** 2025-06-22 19:49:42.694942 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.694951 | orchestrator | 2025-06-22 19:49:42.694960 | orchestrator | 
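The long runs of "Add known links/partitions to the list of available block devices" tasks above come from including a small helper task file once per detected block device (loop0..loop7, sda..sdd, sr0). A minimal sketch of such an include loop, assuming it iterates over ansible_devices (the loop source and loop_var name are assumptions; the included task file paths are taken from the log):

    - name: Add known links to the list of available block devices
      ansible.builtin.include_tasks: /ansible/tasks/_add-device-links.yml
      loop: "{{ ansible_devices.keys() | list }}"
      loop_control:
        loop_var: device

    - name: Add known partitions to the list of available block devices
      ansible.builtin.include_tasks: /ansible/tasks/_add-device-partitions.yml
      loop: "{{ ansible_devices.keys() | list }}"
      loop_control:
        loop_var: device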
TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:42.694969 | orchestrator | Sunday 22 June 2025 19:49:35 +0000 (0:00:00.205) 0:00:09.255 *********** 2025-06-22 19:49:42.694978 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.694986 | orchestrator | 2025-06-22 19:49:42.694995 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:42.695004 | orchestrator | Sunday 22 June 2025 19:49:36 +0000 (0:00:00.215) 0:00:09.470 *********** 2025-06-22 19:49:42.695013 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695021 | orchestrator | 2025-06-22 19:49:42.695030 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:42.695039 | orchestrator | Sunday 22 June 2025 19:49:36 +0000 (0:00:00.181) 0:00:09.652 *********** 2025-06-22 19:49:42.695048 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695056 | orchestrator | 2025-06-22 19:49:42.695065 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:49:42.695074 | orchestrator | Sunday 22 June 2025 19:49:36 +0000 (0:00:00.198) 0:00:09.851 *********** 2025-06-22 19:49:42.695082 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695091 | orchestrator | 2025-06-22 19:49:42.695100 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:49:42.695109 | orchestrator | Sunday 22 June 2025 19:49:36 +0000 (0:00:00.130) 0:00:09.982 *********** 2025-06-22 19:49:42.695118 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '988500a7-3c26-5f89-b599-1c63900dc902'}}) 2025-06-22 19:49:42.695127 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'f1623286-8630-50a6-960f-aa7fe8c22ac9'}}) 2025-06-22 19:49:42.695157 | orchestrator | 2025-06-22 19:49:42.695167 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:49:42.695213 | orchestrator | Sunday 22 June 2025 19:49:36 +0000 (0:00:00.184) 0:00:10.166 *********** 2025-06-22 19:49:42.695228 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'}) 2025-06-22 19:49:42.695238 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'}) 2025-06-22 19:49:42.695247 | orchestrator | 2025-06-22 19:49:42.695256 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 19:49:42.695264 | orchestrator | Sunday 22 June 2025 19:49:38 +0000 (0:00:01.976) 0:00:12.143 *********** 2025-06-22 19:49:42.695273 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:42.695283 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:42.695292 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695301 | orchestrator | 2025-06-22 19:49:42.695309 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 
19:49:42.695318 | orchestrator | Sunday 22 June 2025 19:49:39 +0000 (0:00:00.155) 0:00:12.298 *********** 2025-06-22 19:49:42.695327 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'}) 2025-06-22 19:49:42.695335 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'}) 2025-06-22 19:49:42.695344 | orchestrator | 2025-06-22 19:49:42.695353 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:49:42.695362 | orchestrator | Sunday 22 June 2025 19:49:40 +0000 (0:00:01.489) 0:00:13.787 *********** 2025-06-22 19:49:42.695371 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:42.695380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:42.695389 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695397 | orchestrator | 2025-06-22 19:49:42.695406 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:49:42.695415 | orchestrator | Sunday 22 June 2025 19:49:40 +0000 (0:00:00.161) 0:00:13.949 *********** 2025-06-22 19:49:42.695424 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695432 | orchestrator | 2025-06-22 19:49:42.695441 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:49:42.695465 | orchestrator | Sunday 22 June 2025 19:49:40 +0000 (0:00:00.137) 0:00:14.087 *********** 2025-06-22 19:49:42.695474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:42.695483 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:42.695492 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695501 | orchestrator | 2025-06-22 19:49:42.695509 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-22 19:49:42.695518 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:00.384) 0:00:14.471 *********** 2025-06-22 19:49:42.695527 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695542 | orchestrator | 2025-06-22 19:49:42.695551 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:49:42.695560 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:00.141) 0:00:14.613 *********** 2025-06-22 19:49:42.695583 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:42.695593 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:42.695601 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695614 | orchestrator | 2025-06-22 19:49:42.695628 | orchestrator | 
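The "Create block VGs" and "Create block LVs" tasks above create one volume group and one full-size logical volume per OSD disk, both named after the generated osd_lvm_uuid from ceph_osd_devices. A minimal sketch of equivalent tasks, assuming the community.general.lvg and lvol modules are used and that a dict mapping block VGs to their PVs (here called _block_vgs_to_pvs, a hypothetical name) was built by the earlier "Create dict of block VGs -> PVs" task:

    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"  # assumed lookup, e.g. /dev/sdb
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%FREE
      loop: "{{ lvm_volumes }}"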
TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:49:42.695640 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:00.162) 0:00:14.776 *********** 2025-06-22 19:49:42.695648 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695657 | orchestrator | 2025-06-22 19:49:42.695666 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:49:42.695674 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:00.143) 0:00:14.919 *********** 2025-06-22 19:49:42.695683 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:42.695693 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:42.695701 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695710 | orchestrator | 2025-06-22 19:49:42.695719 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:49:42.695727 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:00.152) 0:00:15.072 *********** 2025-06-22 19:49:42.695736 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:42.695745 | orchestrator | 2025-06-22 19:49:42.695754 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:49:42.695762 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:00.141) 0:00:15.214 *********** 2025-06-22 19:49:42.695771 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:42.695780 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:42.695788 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695797 | orchestrator | 2025-06-22 19:49:42.695806 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:49:42.695814 | orchestrator | Sunday 22 June 2025 19:49:42 +0000 (0:00:00.176) 0:00:15.390 *********** 2025-06-22 19:49:42.695823 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:42.695832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:42.695840 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695849 | orchestrator | 2025-06-22 19:49:42.695858 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:49:42.695866 | orchestrator | Sunday 22 June 2025 19:49:42 +0000 (0:00:00.158) 0:00:15.549 *********** 2025-06-22 19:49:42.695875 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:42.695888 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  
2025-06-22 19:49:42.695903 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695912 | orchestrator | 2025-06-22 19:49:42.695921 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:49:42.695929 | orchestrator | Sunday 22 June 2025 19:49:42 +0000 (0:00:00.150) 0:00:15.700 *********** 2025-06-22 19:49:42.695939 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695953 | orchestrator | 2025-06-22 19:49:42.695965 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:49:42.695974 | orchestrator | Sunday 22 June 2025 19:49:42 +0000 (0:00:00.140) 0:00:15.840 *********** 2025-06-22 19:49:42.695983 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:42.695991 | orchestrator | 2025-06-22 19:49:42.696005 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:49:49.083858 | orchestrator | Sunday 22 June 2025 19:49:42 +0000 (0:00:00.128) 0:00:15.969 *********** 2025-06-22 19:49:49.083966 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.083980 | orchestrator | 2025-06-22 19:49:49.083991 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:49:49.084002 | orchestrator | Sunday 22 June 2025 19:49:42 +0000 (0:00:00.139) 0:00:16.109 *********** 2025-06-22 19:49:49.084012 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:49:49.084022 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:49:49.084033 | orchestrator | } 2025-06-22 19:49:49.084043 | orchestrator | 2025-06-22 19:49:49.084053 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:49:49.084063 | orchestrator | Sunday 22 June 2025 19:49:43 +0000 (0:00:00.337) 0:00:16.446 *********** 2025-06-22 19:49:49.084072 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:49:49.084082 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:49:49.084092 | orchestrator | } 2025-06-22 19:49:49.084102 | orchestrator | 2025-06-22 19:49:49.084111 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:49:49.084121 | orchestrator | Sunday 22 June 2025 19:49:43 +0000 (0:00:00.150) 0:00:16.596 *********** 2025-06-22 19:49:49.084131 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:49:49.084142 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:49:49.084152 | orchestrator | } 2025-06-22 19:49:49.084162 | orchestrator | 2025-06-22 19:49:49.084172 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 19:49:49.084243 | orchestrator | Sunday 22 June 2025 19:49:43 +0000 (0:00:00.145) 0:00:16.742 *********** 2025-06-22 19:49:49.084254 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:49.084264 | orchestrator | 2025-06-22 19:49:49.084274 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:49:49.084284 | orchestrator | Sunday 22 June 2025 19:49:44 +0000 (0:00:00.670) 0:00:17.412 *********** 2025-06-22 19:49:49.084293 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:49.084303 | orchestrator | 2025-06-22 19:49:49.084313 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:49:49.084323 | orchestrator | Sunday 22 June 2025 19:49:44 +0000 (0:00:00.528) 
0:00:17.940 *********** 2025-06-22 19:49:49.084332 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:49.084342 | orchestrator | 2025-06-22 19:49:49.084352 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:49:49.084361 | orchestrator | Sunday 22 June 2025 19:49:45 +0000 (0:00:00.524) 0:00:18.464 *********** 2025-06-22 19:49:49.084371 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:49.084381 | orchestrator | 2025-06-22 19:49:49.084391 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:49:49.084402 | orchestrator | Sunday 22 June 2025 19:49:45 +0000 (0:00:00.150) 0:00:18.615 *********** 2025-06-22 19:49:49.084413 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084424 | orchestrator | 2025-06-22 19:49:49.084436 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:49:49.084447 | orchestrator | Sunday 22 June 2025 19:49:45 +0000 (0:00:00.115) 0:00:18.730 *********** 2025-06-22 19:49:49.084481 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084493 | orchestrator | 2025-06-22 19:49:49.084505 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:49:49.084516 | orchestrator | Sunday 22 June 2025 19:49:45 +0000 (0:00:00.111) 0:00:18.842 *********** 2025-06-22 19:49:49.084527 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:49:49.084539 | orchestrator |  "vgs_report": { 2025-06-22 19:49:49.084550 | orchestrator |  "vg": [] 2025-06-22 19:49:49.084561 | orchestrator |  } 2025-06-22 19:49:49.084573 | orchestrator | } 2025-06-22 19:49:49.084584 | orchestrator | 2025-06-22 19:49:49.084595 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 19:49:49.084606 | orchestrator | Sunday 22 June 2025 19:49:45 +0000 (0:00:00.148) 0:00:18.990 *********** 2025-06-22 19:49:49.084617 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084628 | orchestrator | 2025-06-22 19:49:49.084638 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:49:49.084648 | orchestrator | Sunday 22 June 2025 19:49:45 +0000 (0:00:00.181) 0:00:19.172 *********** 2025-06-22 19:49:49.084657 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084667 | orchestrator | 2025-06-22 19:49:49.084676 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:49:49.084686 | orchestrator | Sunday 22 June 2025 19:49:46 +0000 (0:00:00.143) 0:00:19.316 *********** 2025-06-22 19:49:49.084696 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084705 | orchestrator | 2025-06-22 19:49:49.084715 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:49:49.084724 | orchestrator | Sunday 22 June 2025 19:49:46 +0000 (0:00:00.134) 0:00:19.450 *********** 2025-06-22 19:49:49.084734 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084743 | orchestrator | 2025-06-22 19:49:49.084753 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 19:49:49.084763 | orchestrator | Sunday 22 June 2025 19:49:46 +0000 (0:00:00.371) 0:00:19.822 *********** 2025-06-22 19:49:49.084772 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084781 | orchestrator | 
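The "Gather DB/WAL VGs with total and available size in bytes" tasks above feed the vgs_report printed earlier (empty here, since this testbed defines no dedicated DB/WAL devices). A minimal sketch of how such data can be collected and combined from LVM2's JSON report output; the register/fact names mirror the task titles but are otherwise assumptions:

    - name: Gather DB VGs with total and available size in bytes
      ansible.builtin.command: vgs --units b --nosuffix --reportformat json
      register: _db_vgs_cmd_output
      changed_when: false

    - name: Combine JSON from _db/wal/db_wal_vgs_cmd_output
      ansible.builtin.set_fact:
        # LVM JSON reports have the shape {"report": [{"vg": [...]}]}
        vgs_report: "{{ (_db_vgs_cmd_output.stdout | from_json).report[0] }}"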
2025-06-22 19:49:49.084791 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:49:49.084801 | orchestrator | Sunday 22 June 2025 19:49:46 +0000 (0:00:00.141) 0:00:19.964 *********** 2025-06-22 19:49:49.084811 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084820 | orchestrator | 2025-06-22 19:49:49.084830 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:49:49.084839 | orchestrator | Sunday 22 June 2025 19:49:46 +0000 (0:00:00.143) 0:00:20.107 *********** 2025-06-22 19:49:49.084849 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084859 | orchestrator | 2025-06-22 19:49:49.084868 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:49:49.084878 | orchestrator | Sunday 22 June 2025 19:49:46 +0000 (0:00:00.140) 0:00:20.248 *********** 2025-06-22 19:49:49.084887 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084897 | orchestrator | 2025-06-22 19:49:49.084907 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:49:49.084933 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:00.143) 0:00:20.392 *********** 2025-06-22 19:49:49.084944 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084953 | orchestrator | 2025-06-22 19:49:49.084963 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:49:49.084973 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:00.146) 0:00:20.539 *********** 2025-06-22 19:49:49.084983 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.084992 | orchestrator | 2025-06-22 19:49:49.085002 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:49:49.085012 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:00.153) 0:00:20.692 *********** 2025-06-22 19:49:49.085021 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085038 | orchestrator | 2025-06-22 19:49:49.085048 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:49:49.085057 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:00.137) 0:00:20.830 *********** 2025-06-22 19:49:49.085067 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085077 | orchestrator | 2025-06-22 19:49:49.085086 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:49:49.085096 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:00.131) 0:00:20.962 *********** 2025-06-22 19:49:49.085106 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085115 | orchestrator | 2025-06-22 19:49:49.085125 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:49:49.085135 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:00.137) 0:00:21.099 *********** 2025-06-22 19:49:49.085144 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085154 | orchestrator | 2025-06-22 19:49:49.085164 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 19:49:49.085174 | orchestrator | Sunday 22 June 2025 19:49:47 +0000 (0:00:00.130) 0:00:21.230 *********** 2025-06-22 19:49:49.085209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:49.085221 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:49.085231 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085241 | orchestrator | 2025-06-22 19:49:49.085250 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:49:49.085260 | orchestrator | Sunday 22 June 2025 19:49:48 +0000 (0:00:00.143) 0:00:21.373 *********** 2025-06-22 19:49:49.085269 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:49.085279 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:49.085289 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085298 | orchestrator | 2025-06-22 19:49:49.085308 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:49:49.085317 | orchestrator | Sunday 22 June 2025 19:49:48 +0000 (0:00:00.364) 0:00:21.738 *********** 2025-06-22 19:49:49.085327 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:49.085337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:49.085346 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085356 | orchestrator | 2025-06-22 19:49:49.085381 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:49:49.085391 | orchestrator | Sunday 22 June 2025 19:49:48 +0000 (0:00:00.158) 0:00:21.897 *********** 2025-06-22 19:49:49.085400 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:49.085410 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:49.085420 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085429 | orchestrator | 2025-06-22 19:49:49.085439 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:49:49.085449 | orchestrator | Sunday 22 June 2025 19:49:48 +0000 (0:00:00.153) 0:00:22.050 *********** 2025-06-22 19:49:49.085458 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:49.085478 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:49.085489 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:49.085498 | orchestrator | 2025-06-22 19:49:49.085508 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 
2025-06-22 19:49:49.085517 | orchestrator | Sunday 22 June 2025 19:49:48 +0000 (0:00:00.157) 0:00:22.208 *********** 2025-06-22 19:49:49.085527 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:49.085543 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:54.549728 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:54.549837 | orchestrator | 2025-06-22 19:49:54.549853 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:49:54.549867 | orchestrator | Sunday 22 June 2025 19:49:49 +0000 (0:00:00.150) 0:00:22.359 *********** 2025-06-22 19:49:54.549878 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:54.549891 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:54.549902 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:54.549913 | orchestrator | 2025-06-22 19:49:54.549925 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:49:54.549936 | orchestrator | Sunday 22 June 2025 19:49:49 +0000 (0:00:00.151) 0:00:22.510 *********** 2025-06-22 19:49:54.549947 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:54.549958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:54.549969 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:54.549983 | orchestrator | 2025-06-22 19:49:54.550001 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:49:54.550141 | orchestrator | Sunday 22 June 2025 19:49:49 +0000 (0:00:00.158) 0:00:22.668 *********** 2025-06-22 19:49:54.550162 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:54.550227 | orchestrator | 2025-06-22 19:49:54.550247 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:49:54.550263 | orchestrator | Sunday 22 June 2025 19:49:49 +0000 (0:00:00.518) 0:00:23.187 *********** 2025-06-22 19:49:54.550280 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:54.550297 | orchestrator | 2025-06-22 19:49:54.550314 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:49:54.550331 | orchestrator | Sunday 22 June 2025 19:49:50 +0000 (0:00:00.516) 0:00:23.703 *********** 2025-06-22 19:49:54.550348 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:54.550365 | orchestrator | 2025-06-22 19:49:54.550383 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-22 19:49:54.550400 | orchestrator | Sunday 22 June 2025 19:49:50 +0000 (0:00:00.145) 0:00:23.849 *********** 2025-06-22 19:49:54.550417 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 
'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'vg_name': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'}) 2025-06-22 19:49:54.550436 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'vg_name': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'}) 2025-06-22 19:49:54.550453 | orchestrator | 2025-06-22 19:49:54.550507 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:49:54.550526 | orchestrator | Sunday 22 June 2025 19:49:50 +0000 (0:00:00.169) 0:00:24.019 *********** 2025-06-22 19:49:54.550545 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:54.550558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:54.550569 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:54.550580 | orchestrator | 2025-06-22 19:49:54.550591 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:49:54.550602 | orchestrator | Sunday 22 June 2025 19:49:50 +0000 (0:00:00.161) 0:00:24.180 *********** 2025-06-22 19:49:54.550613 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:54.550624 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:54.550635 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:54.550646 | orchestrator | 2025-06-22 19:49:54.550657 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-22 19:49:54.550668 | orchestrator | Sunday 22 June 2025 19:49:51 +0000 (0:00:00.357) 0:00:24.538 *********** 2025-06-22 19:49:54.550693 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'})  2025-06-22 19:49:54.550705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'})  2025-06-22 19:49:54.550716 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:54.550726 | orchestrator | 2025-06-22 19:49:54.550737 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-22 19:49:54.550748 | orchestrator | Sunday 22 June 2025 19:49:51 +0000 (0:00:00.169) 0:00:24.707 *********** 2025-06-22 19:49:54.550758 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:49:54.550769 | orchestrator |  "lvm_report": { 2025-06-22 19:49:54.550780 | orchestrator |  "lv": [ 2025-06-22 19:49:54.550791 | orchestrator |  { 2025-06-22 19:49:54.550823 | orchestrator |  "lv_name": "osd-block-988500a7-3c26-5f89-b599-1c63900dc902", 2025-06-22 19:49:54.550836 | orchestrator |  "vg_name": "ceph-988500a7-3c26-5f89-b599-1c63900dc902" 2025-06-22 19:49:54.550846 | orchestrator |  }, 2025-06-22 19:49:54.550857 | orchestrator |  { 2025-06-22 19:49:54.550868 | orchestrator |  "lv_name": "osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9", 2025-06-22 19:49:54.550879 | orchestrator |  "vg_name": 
"ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9" 2025-06-22 19:49:54.550889 | orchestrator |  } 2025-06-22 19:49:54.550900 | orchestrator |  ], 2025-06-22 19:49:54.550911 | orchestrator |  "pv": [ 2025-06-22 19:49:54.550922 | orchestrator |  { 2025-06-22 19:49:54.550932 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-22 19:49:54.550943 | orchestrator |  "vg_name": "ceph-988500a7-3c26-5f89-b599-1c63900dc902" 2025-06-22 19:49:54.550954 | orchestrator |  }, 2025-06-22 19:49:54.550964 | orchestrator |  { 2025-06-22 19:49:54.550975 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-22 19:49:54.550986 | orchestrator |  "vg_name": "ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9" 2025-06-22 19:49:54.550996 | orchestrator |  } 2025-06-22 19:49:54.551007 | orchestrator |  ] 2025-06-22 19:49:54.551018 | orchestrator |  } 2025-06-22 19:49:54.551029 | orchestrator | } 2025-06-22 19:49:54.551041 | orchestrator | 2025-06-22 19:49:54.551051 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:49:54.551073 | orchestrator | 2025-06-22 19:49:54.551084 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:49:54.551094 | orchestrator | Sunday 22 June 2025 19:49:51 +0000 (0:00:00.301) 0:00:25.008 *********** 2025-06-22 19:49:54.551105 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:49:54.551116 | orchestrator | 2025-06-22 19:49:54.551126 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:49:54.551137 | orchestrator | Sunday 22 June 2025 19:49:51 +0000 (0:00:00.245) 0:00:25.253 *********** 2025-06-22 19:49:54.551148 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:49:54.551158 | orchestrator | 2025-06-22 19:49:54.551169 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:54.551202 | orchestrator | Sunday 22 June 2025 19:49:52 +0000 (0:00:00.236) 0:00:25.490 *********** 2025-06-22 19:49:54.551214 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:49:54.551225 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:49:54.551235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:49:54.551246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:49:54.551256 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:49:54.551267 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:49:54.551277 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:49:54.551288 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:49:54.551298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-22 19:49:54.551309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:49:54.551320 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:49:54.551330 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 
2025-06-22 19:49:54.551341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:49:54.551351 | orchestrator | 2025-06-22 19:49:54.551362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:54.551373 | orchestrator | Sunday 22 June 2025 19:49:52 +0000 (0:00:00.413) 0:00:25.904 *********** 2025-06-22 19:49:54.551383 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:54.551394 | orchestrator | 2025-06-22 19:49:54.551404 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:54.551415 | orchestrator | Sunday 22 June 2025 19:49:52 +0000 (0:00:00.204) 0:00:26.108 *********** 2025-06-22 19:49:54.551425 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:54.551436 | orchestrator | 2025-06-22 19:49:54.551447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:54.551457 | orchestrator | Sunday 22 June 2025 19:49:53 +0000 (0:00:00.197) 0:00:26.305 *********** 2025-06-22 19:49:54.551468 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:54.551478 | orchestrator | 2025-06-22 19:49:54.551489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:54.551500 | orchestrator | Sunday 22 June 2025 19:49:53 +0000 (0:00:00.189) 0:00:26.495 *********** 2025-06-22 19:49:54.551510 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:54.551521 | orchestrator | 2025-06-22 19:49:54.551531 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:54.551542 | orchestrator | Sunday 22 June 2025 19:49:53 +0000 (0:00:00.662) 0:00:27.158 *********** 2025-06-22 19:49:54.551552 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:54.551570 | orchestrator | 2025-06-22 19:49:54.551581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:54.551591 | orchestrator | Sunday 22 June 2025 19:49:54 +0000 (0:00:00.205) 0:00:27.364 *********** 2025-06-22 19:49:54.551602 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:54.551612 | orchestrator | 2025-06-22 19:49:54.551623 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:54.551634 | orchestrator | Sunday 22 June 2025 19:49:54 +0000 (0:00:00.240) 0:00:27.604 *********** 2025-06-22 19:49:54.551644 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:54.551655 | orchestrator | 2025-06-22 19:49:54.551673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:04.996678 | orchestrator | Sunday 22 June 2025 19:49:54 +0000 (0:00:00.220) 0:00:27.825 *********** 2025-06-22 19:50:04.996794 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.996810 | orchestrator | 2025-06-22 19:50:04.996823 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:04.996835 | orchestrator | Sunday 22 June 2025 19:49:54 +0000 (0:00:00.221) 0:00:28.047 *********** 2025-06-22 19:50:04.996847 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da) 2025-06-22 19:50:04.996859 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da) 2025-06-22 
19:50:04.996870 | orchestrator | 2025-06-22 19:50:04.996881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:04.996892 | orchestrator | Sunday 22 June 2025 19:49:55 +0000 (0:00:00.452) 0:00:28.499 *********** 2025-06-22 19:50:04.996903 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a) 2025-06-22 19:50:04.996914 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a) 2025-06-22 19:50:04.996925 | orchestrator | 2025-06-22 19:50:04.996936 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:04.996947 | orchestrator | Sunday 22 June 2025 19:49:55 +0000 (0:00:00.438) 0:00:28.938 *********** 2025-06-22 19:50:04.996958 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b) 2025-06-22 19:50:04.996969 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b) 2025-06-22 19:50:04.996980 | orchestrator | 2025-06-22 19:50:04.996991 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:04.997002 | orchestrator | Sunday 22 June 2025 19:49:56 +0000 (0:00:00.460) 0:00:29.398 *********** 2025-06-22 19:50:04.997012 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1) 2025-06-22 19:50:04.997043 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1) 2025-06-22 19:50:04.997055 | orchestrator | 2025-06-22 19:50:04.997066 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:04.997077 | orchestrator | Sunday 22 June 2025 19:49:56 +0000 (0:00:00.426) 0:00:29.825 *********** 2025-06-22 19:50:04.997088 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:50:04.997099 | orchestrator | 2025-06-22 19:50:04.997111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997122 | orchestrator | Sunday 22 June 2025 19:49:56 +0000 (0:00:00.331) 0:00:30.157 *********** 2025-06-22 19:50:04.997133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:50:04.997145 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:50:04.997155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:50:04.997166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:50:04.997239 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:50:04.997263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:50:04.997280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:50:04.997299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:50:04.997310 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-22 19:50:04.997321 | 
orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:50:04.997331 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:50:04.997342 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:50:04.997353 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:50:04.997364 | orchestrator | 2025-06-22 19:50:04.997375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997391 | orchestrator | Sunday 22 June 2025 19:49:57 +0000 (0:00:00.636) 0:00:30.793 *********** 2025-06-22 19:50:04.997402 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997413 | orchestrator | 2025-06-22 19:50:04.997425 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997436 | orchestrator | Sunday 22 June 2025 19:49:57 +0000 (0:00:00.214) 0:00:31.007 *********** 2025-06-22 19:50:04.997446 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997457 | orchestrator | 2025-06-22 19:50:04.997468 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997479 | orchestrator | Sunday 22 June 2025 19:49:57 +0000 (0:00:00.216) 0:00:31.224 *********** 2025-06-22 19:50:04.997490 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997500 | orchestrator | 2025-06-22 19:50:04.997511 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997522 | orchestrator | Sunday 22 June 2025 19:49:58 +0000 (0:00:00.203) 0:00:31.428 *********** 2025-06-22 19:50:04.997533 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997544 | orchestrator | 2025-06-22 19:50:04.997573 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997585 | orchestrator | Sunday 22 June 2025 19:49:58 +0000 (0:00:00.201) 0:00:31.629 *********** 2025-06-22 19:50:04.997596 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997606 | orchestrator | 2025-06-22 19:50:04.997617 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997633 | orchestrator | Sunday 22 June 2025 19:49:58 +0000 (0:00:00.212) 0:00:31.842 *********** 2025-06-22 19:50:04.997651 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997668 | orchestrator | 2025-06-22 19:50:04.997685 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997701 | orchestrator | Sunday 22 June 2025 19:49:58 +0000 (0:00:00.267) 0:00:32.109 *********** 2025-06-22 19:50:04.997717 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997735 | orchestrator | 2025-06-22 19:50:04.997753 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997773 | orchestrator | Sunday 22 June 2025 19:49:59 +0000 (0:00:00.195) 0:00:32.305 *********** 2025-06-22 19:50:04.997790 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997806 | orchestrator | 2025-06-22 19:50:04.997817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997828 | orchestrator 
| Sunday 22 June 2025 19:49:59 +0000 (0:00:00.219) 0:00:32.524 *********** 2025-06-22 19:50:04.997839 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-22 19:50:04.997849 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-22 19:50:04.997871 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-22 19:50:04.997882 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-22 19:50:04.997893 | orchestrator | 2025-06-22 19:50:04.997903 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997914 | orchestrator | Sunday 22 June 2025 19:50:00 +0000 (0:00:00.845) 0:00:33.370 *********** 2025-06-22 19:50:04.997925 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997935 | orchestrator | 2025-06-22 19:50:04.997946 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997957 | orchestrator | Sunday 22 June 2025 19:50:00 +0000 (0:00:00.223) 0:00:33.594 *********** 2025-06-22 19:50:04.997967 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.997978 | orchestrator | 2025-06-22 19:50:04.997988 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.997999 | orchestrator | Sunday 22 June 2025 19:50:00 +0000 (0:00:00.198) 0:00:33.792 *********** 2025-06-22 19:50:04.998010 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.998091 | orchestrator | 2025-06-22 19:50:04.998103 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:04.998114 | orchestrator | Sunday 22 June 2025 19:50:01 +0000 (0:00:00.669) 0:00:34.462 *********** 2025-06-22 19:50:04.998125 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.998135 | orchestrator | 2025-06-22 19:50:04.998147 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:50:04.998158 | orchestrator | Sunday 22 June 2025 19:50:01 +0000 (0:00:00.202) 0:00:34.665 *********** 2025-06-22 19:50:04.998169 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.998179 | orchestrator | 2025-06-22 19:50:04.998213 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:50:04.998226 | orchestrator | Sunday 22 June 2025 19:50:01 +0000 (0:00:00.144) 0:00:34.810 *********** 2025-06-22 19:50:04.998245 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '809c9636-3d83-5d3b-8a98-356a4387ae79'}}) 2025-06-22 19:50:04.998263 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}}) 2025-06-22 19:50:04.998280 | orchestrator | 2025-06-22 19:50:04.998298 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:50:04.998316 | orchestrator | Sunday 22 June 2025 19:50:01 +0000 (0:00:00.191) 0:00:35.001 *********** 2025-06-22 19:50:04.998335 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'}) 2025-06-22 19:50:04.998354 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}) 2025-06-22 19:50:04.998374 | orchestrator | 2025-06-22 19:50:04.998393 | orchestrator | TASK 
[Print 'Create block VGs'] ************************************************ 2025-06-22 19:50:04.998411 | orchestrator | Sunday 22 June 2025 19:50:03 +0000 (0:00:01.860) 0:00:36.862 *********** 2025-06-22 19:50:04.998435 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:04.998449 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:04.998460 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:04.998471 | orchestrator | 2025-06-22 19:50:04.998481 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 19:50:04.998492 | orchestrator | Sunday 22 June 2025 19:50:03 +0000 (0:00:00.166) 0:00:37.028 *********** 2025-06-22 19:50:04.998503 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'}) 2025-06-22 19:50:04.998523 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}) 2025-06-22 19:50:04.998534 | orchestrator | 2025-06-22 19:50:04.998557 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:50:10.617487 | orchestrator | Sunday 22 June 2025 19:50:04 +0000 (0:00:01.239) 0:00:38.267 *********** 2025-06-22 19:50:10.617602 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:10.617620 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:10.617633 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.617645 | orchestrator | 2025-06-22 19:50:10.617657 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:50:10.617668 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:00.150) 0:00:38.417 *********** 2025-06-22 19:50:10.617679 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.617690 | orchestrator | 2025-06-22 19:50:10.617702 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:50:10.617713 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:00.136) 0:00:38.554 *********** 2025-06-22 19:50:10.617724 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:10.617735 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:10.617746 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.617756 | orchestrator | 2025-06-22 19:50:10.617767 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-22 19:50:10.617778 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:00.145) 0:00:38.700 *********** 2025-06-22 19:50:10.617789 | orchestrator | skipping: [testbed-node-4] 
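The "Create block VGs" and "Create block LVs" tasks above create, for every disk listed in ceph_osd_devices (sdb and sdc on this node), a dedicated ceph-<osd_lvm_uuid> volume group and a single osd-block-<osd_lvm_uuid> logical volume spanning it; these VG/LV names are what later shows up in lvm_volumes and in the lvm_report output. A hedged sketch of such a step using the community.general LVM modules; block_vgs_to_pvs stands in for the "Create dict of block VGs -> PVs from ceph_osd_devices" result and is an assumed variable name:

- name: Create block VGs
  community.general.lvg:
    vg: "{{ item.data_vg }}"                           # e.g. ceph-809c9636-3d83-5d3b-8a98-356a4387ae79
    pvs: "/dev/{{ block_vgs_to_pvs[item.data_vg] }}"   # e.g. /dev/sdb; assumed lookup into the VG -> PV dict
  loop: "{{ lvm_volumes }}"

- name: Create block LVs
  community.general.lvol:
    vg: "{{ item.data_vg }}"
    lv: "{{ item.data }}"                              # e.g. osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79
    size: "100%FREE"                                   # one LV consuming the whole VG/disk
    shrink: false
  loop: "{{ lvm_volumes }}"
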
2025-06-22 19:50:10.617800 | orchestrator | 2025-06-22 19:50:10.617811 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:50:10.617821 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:00.139) 0:00:38.839 *********** 2025-06-22 19:50:10.617833 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:10.617844 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:10.617855 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.617866 | orchestrator | 2025-06-22 19:50:10.617877 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:50:10.617887 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:00.155) 0:00:38.994 *********** 2025-06-22 19:50:10.617898 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.617909 | orchestrator | 2025-06-22 19:50:10.617920 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:50:10.617931 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.349) 0:00:39.344 *********** 2025-06-22 19:50:10.617942 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:10.617953 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:10.617964 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.617975 | orchestrator | 2025-06-22 19:50:10.617985 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:50:10.618113 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.161) 0:00:39.505 *********** 2025-06-22 19:50:10.618129 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:10.618143 | orchestrator | 2025-06-22 19:50:10.618156 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:50:10.618168 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.142) 0:00:39.648 *********** 2025-06-22 19:50:10.618181 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:10.618217 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:10.618232 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.618244 | orchestrator | 2025-06-22 19:50:10.618256 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:50:10.618269 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.158) 0:00:39.807 *********** 2025-06-22 19:50:10.618281 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:10.618293 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:10.618304 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.618316 | orchestrator | 2025-06-22 19:50:10.618328 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:50:10.618341 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.155) 0:00:39.963 *********** 2025-06-22 19:50:10.618373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:10.618385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:10.618396 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.618406 | orchestrator | 2025-06-22 19:50:10.618417 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:50:10.618428 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.141) 0:00:40.104 *********** 2025-06-22 19:50:10.618439 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.618449 | orchestrator | 2025-06-22 19:50:10.618460 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:50:10.618471 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:00.131) 0:00:40.236 *********** 2025-06-22 19:50:10.618481 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.618492 | orchestrator | 2025-06-22 19:50:10.618503 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:50:10.618573 | orchestrator | Sunday 22 June 2025 19:50:07 +0000 (0:00:00.125) 0:00:40.362 *********** 2025-06-22 19:50:10.618587 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.618598 | orchestrator | 2025-06-22 19:50:10.618608 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:50:10.618619 | orchestrator | Sunday 22 June 2025 19:50:07 +0000 (0:00:00.154) 0:00:40.516 *********** 2025-06-22 19:50:10.618631 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:50:10.618642 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:50:10.618653 | orchestrator | } 2025-06-22 19:50:10.618664 | orchestrator | 2025-06-22 19:50:10.618675 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:50:10.618686 | orchestrator | Sunday 22 June 2025 19:50:07 +0000 (0:00:00.167) 0:00:40.684 *********** 2025-06-22 19:50:10.618697 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:50:10.618708 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:50:10.618719 | orchestrator | } 2025-06-22 19:50:10.618741 | orchestrator | 2025-06-22 19:50:10.618752 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:50:10.618763 | orchestrator | Sunday 22 June 2025 19:50:07 +0000 (0:00:00.142) 0:00:40.826 *********** 2025-06-22 19:50:10.618774 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:50:10.618784 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:50:10.618795 | orchestrator | } 2025-06-22 19:50:10.618806 | orchestrator | 2025-06-22 19:50:10.618817 | orchestrator | TASK [Gather DB VGs 
with total and available size in bytes] ******************** 2025-06-22 19:50:10.618828 | orchestrator | Sunday 22 June 2025 19:50:07 +0000 (0:00:00.146) 0:00:40.972 *********** 2025-06-22 19:50:10.618838 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:10.618849 | orchestrator | 2025-06-22 19:50:10.618860 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:50:10.618918 | orchestrator | Sunday 22 June 2025 19:50:08 +0000 (0:00:00.728) 0:00:41.701 *********** 2025-06-22 19:50:10.618933 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:10.618943 | orchestrator | 2025-06-22 19:50:10.618954 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:50:10.618965 | orchestrator | Sunday 22 June 2025 19:50:08 +0000 (0:00:00.559) 0:00:42.260 *********** 2025-06-22 19:50:10.618976 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:10.618987 | orchestrator | 2025-06-22 19:50:10.618998 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:50:10.619008 | orchestrator | Sunday 22 June 2025 19:50:09 +0000 (0:00:00.571) 0:00:42.832 *********** 2025-06-22 19:50:10.619019 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:10.619030 | orchestrator | 2025-06-22 19:50:10.619040 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:50:10.619051 | orchestrator | Sunday 22 June 2025 19:50:09 +0000 (0:00:00.159) 0:00:42.991 *********** 2025-06-22 19:50:10.619062 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.619073 | orchestrator | 2025-06-22 19:50:10.619083 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:50:10.619111 | orchestrator | Sunday 22 June 2025 19:50:09 +0000 (0:00:00.110) 0:00:43.101 *********** 2025-06-22 19:50:10.619122 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.619133 | orchestrator | 2025-06-22 19:50:10.619144 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:50:10.619155 | orchestrator | Sunday 22 June 2025 19:50:09 +0000 (0:00:00.103) 0:00:43.205 *********** 2025-06-22 19:50:10.619165 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:50:10.619176 | orchestrator |  "vgs_report": { 2025-06-22 19:50:10.619187 | orchestrator |  "vg": [] 2025-06-22 19:50:10.619242 | orchestrator |  } 2025-06-22 19:50:10.619255 | orchestrator | } 2025-06-22 19:50:10.619266 | orchestrator | 2025-06-22 19:50:10.619276 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 19:50:10.619294 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.143) 0:00:43.349 *********** 2025-06-22 19:50:10.619305 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.619316 | orchestrator | 2025-06-22 19:50:10.619326 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:50:10.619337 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.128) 0:00:43.478 *********** 2025-06-22 19:50:10.619348 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.619359 | orchestrator | 2025-06-22 19:50:10.619369 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:50:10.619380 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.139) 
0:00:43.617 *********** 2025-06-22 19:50:10.619391 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.619401 | orchestrator | 2025-06-22 19:50:10.619412 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:50:10.619423 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.137) 0:00:43.755 *********** 2025-06-22 19:50:10.619434 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:10.619454 | orchestrator | 2025-06-22 19:50:10.619464 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 19:50:10.619484 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.135) 0:00:43.891 *********** 2025-06-22 19:50:15.410662 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.410793 | orchestrator | 2025-06-22 19:50:15.410810 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:50:15.410823 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:00.159) 0:00:44.051 *********** 2025-06-22 19:50:15.410834 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.410845 | orchestrator | 2025-06-22 19:50:15.410857 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:50:15.410868 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:00.360) 0:00:44.411 *********** 2025-06-22 19:50:15.410879 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.410901 | orchestrator | 2025-06-22 19:50:15.410924 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:50:15.410936 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:00.139) 0:00:44.550 *********** 2025-06-22 19:50:15.410947 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.410957 | orchestrator | 2025-06-22 19:50:15.410968 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:50:15.410979 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:00.161) 0:00:44.711 *********** 2025-06-22 19:50:15.410990 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411000 | orchestrator | 2025-06-22 19:50:15.411013 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:50:15.411024 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:00.163) 0:00:44.875 *********** 2025-06-22 19:50:15.411035 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411046 | orchestrator | 2025-06-22 19:50:15.411057 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:50:15.411067 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:00.130) 0:00:45.005 *********** 2025-06-22 19:50:15.411078 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411088 | orchestrator | 2025-06-22 19:50:15.411100 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:50:15.411110 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:00.131) 0:00:45.137 *********** 2025-06-22 19:50:15.411121 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411131 | orchestrator | 2025-06-22 19:50:15.411142 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:50:15.411152 | orchestrator | Sunday 22 June 2025 19:50:11 
+0000 (0:00:00.131) 0:00:45.269 *********** 2025-06-22 19:50:15.411163 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411173 | orchestrator | 2025-06-22 19:50:15.411184 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:50:15.411215 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:00.133) 0:00:45.402 *********** 2025-06-22 19:50:15.411227 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411239 | orchestrator | 2025-06-22 19:50:15.411251 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 19:50:15.411264 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:00.133) 0:00:45.536 *********** 2025-06-22 19:50:15.411277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.411291 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.411302 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411314 | orchestrator | 2025-06-22 19:50:15.411325 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:50:15.411338 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:00.163) 0:00:45.700 *********** 2025-06-22 19:50:15.411373 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.411386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.411399 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411410 | orchestrator | 2025-06-22 19:50:15.411422 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:50:15.411434 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:00.142) 0:00:45.842 *********** 2025-06-22 19:50:15.411446 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.411475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.411487 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411499 | orchestrator | 2025-06-22 19:50:15.411511 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:50:15.411523 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:00.155) 0:00:45.997 *********** 2025-06-22 19:50:15.411535 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.411547 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.411559 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411571 | orchestrator | 2025-06-22 19:50:15.411582 | 
orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:50:15.411609 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:00.366) 0:00:46.364 *********** 2025-06-22 19:50:15.411621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.411632 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.411644 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411655 | orchestrator | 2025-06-22 19:50:15.411666 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 19:50:15.411677 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:00.160) 0:00:46.524 *********** 2025-06-22 19:50:15.411688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.411699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.411709 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411720 | orchestrator | 2025-06-22 19:50:15.411730 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:50:15.411741 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:00.156) 0:00:46.681 *********** 2025-06-22 19:50:15.411752 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.411762 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.411773 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411784 | orchestrator | 2025-06-22 19:50:15.411794 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:50:15.411813 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:00.154) 0:00:46.835 *********** 2025-06-22 19:50:15.411823 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.411834 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.411845 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.411855 | orchestrator | 2025-06-22 19:50:15.411866 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:50:15.411877 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:00.152) 0:00:46.987 *********** 2025-06-22 19:50:15.411887 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:15.411898 | orchestrator | 2025-06-22 19:50:15.411909 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:50:15.411920 | orchestrator | Sunday 22 June 2025 19:50:14 +0000 (0:00:00.533) 
0:00:47.521 *********** 2025-06-22 19:50:15.411930 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:15.411940 | orchestrator | 2025-06-22 19:50:15.411951 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:50:15.411962 | orchestrator | Sunday 22 June 2025 19:50:14 +0000 (0:00:00.513) 0:00:48.034 *********** 2025-06-22 19:50:15.411972 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:15.411983 | orchestrator | 2025-06-22 19:50:15.411994 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-22 19:50:15.412004 | orchestrator | Sunday 22 June 2025 19:50:14 +0000 (0:00:00.149) 0:00:48.184 *********** 2025-06-22 19:50:15.412015 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'vg_name': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}) 2025-06-22 19:50:15.412027 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'vg_name': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'}) 2025-06-22 19:50:15.412038 | orchestrator | 2025-06-22 19:50:15.412048 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:50:15.412059 | orchestrator | Sunday 22 June 2025 19:50:15 +0000 (0:00:00.180) 0:00:48.364 *********** 2025-06-22 19:50:15.412075 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.412086 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.412097 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:15.412108 | orchestrator | 2025-06-22 19:50:15.412118 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:50:15.412129 | orchestrator | Sunday 22 June 2025 19:50:15 +0000 (0:00:00.161) 0:00:48.526 *********** 2025-06-22 19:50:15.412140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:15.412151 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:15.412174 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:21.034570 | orchestrator | 2025-06-22 19:50:21.034663 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-22 19:50:21.034680 | orchestrator | Sunday 22 June 2025 19:50:15 +0000 (0:00:00.160) 0:00:48.686 *********** 2025-06-22 19:50:21.034693 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'})  2025-06-22 19:50:21.034705 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'})  2025-06-22 19:50:21.034737 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:21.034749 | orchestrator | 2025-06-22 19:50:21.034760 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-22 
19:50:21.034775 | orchestrator | Sunday 22 June 2025 19:50:15 +0000 (0:00:00.159) 0:00:48.846 *********** 2025-06-22 19:50:21.034786 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:50:21.034797 | orchestrator |  "lvm_report": { 2025-06-22 19:50:21.034809 | orchestrator |  "lv": [ 2025-06-22 19:50:21.034820 | orchestrator |  { 2025-06-22 19:50:21.034831 | orchestrator |  "lv_name": "osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e", 2025-06-22 19:50:21.034842 | orchestrator |  "vg_name": "ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e" 2025-06-22 19:50:21.034853 | orchestrator |  }, 2025-06-22 19:50:21.034864 | orchestrator |  { 2025-06-22 19:50:21.034874 | orchestrator |  "lv_name": "osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79", 2025-06-22 19:50:21.034885 | orchestrator |  "vg_name": "ceph-809c9636-3d83-5d3b-8a98-356a4387ae79" 2025-06-22 19:50:21.034896 | orchestrator |  } 2025-06-22 19:50:21.034906 | orchestrator |  ], 2025-06-22 19:50:21.034917 | orchestrator |  "pv": [ 2025-06-22 19:50:21.034927 | orchestrator |  { 2025-06-22 19:50:21.034938 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-22 19:50:21.034949 | orchestrator |  "vg_name": "ceph-809c9636-3d83-5d3b-8a98-356a4387ae79" 2025-06-22 19:50:21.034959 | orchestrator |  }, 2025-06-22 19:50:21.034970 | orchestrator |  { 2025-06-22 19:50:21.034980 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-22 19:50:21.034991 | orchestrator |  "vg_name": "ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e" 2025-06-22 19:50:21.035002 | orchestrator |  } 2025-06-22 19:50:21.035012 | orchestrator |  ] 2025-06-22 19:50:21.035023 | orchestrator |  } 2025-06-22 19:50:21.035034 | orchestrator | } 2025-06-22 19:50:21.035045 | orchestrator | 2025-06-22 19:50:21.035056 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:50:21.035067 | orchestrator | 2025-06-22 19:50:21.035077 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:50:21.035088 | orchestrator | Sunday 22 June 2025 19:50:16 +0000 (0:00:00.478) 0:00:49.325 *********** 2025-06-22 19:50:21.035099 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:50:21.035112 | orchestrator | 2025-06-22 19:50:21.035124 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:50:21.035137 | orchestrator | Sunday 22 June 2025 19:50:16 +0000 (0:00:00.243) 0:00:49.568 *********** 2025-06-22 19:50:21.035149 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:21.035160 | orchestrator | 2025-06-22 19:50:21.035173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035185 | orchestrator | Sunday 22 June 2025 19:50:16 +0000 (0:00:00.235) 0:00:49.804 *********** 2025-06-22 19:50:21.035197 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:50:21.035249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:50:21.035262 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:50:21.035274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:50:21.035287 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:50:21.035299 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:50:21.035312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:50:21.035324 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:50:21.035345 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-22 19:50:21.035358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:50:21.035370 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:50:21.035383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:50:21.035395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:50:21.035407 | orchestrator | 2025-06-22 19:50:21.035419 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035431 | orchestrator | Sunday 22 June 2025 19:50:16 +0000 (0:00:00.410) 0:00:50.214 *********** 2025-06-22 19:50:21.035443 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:21.035457 | orchestrator | 2025-06-22 19:50:21.035469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035480 | orchestrator | Sunday 22 June 2025 19:50:17 +0000 (0:00:00.203) 0:00:50.418 *********** 2025-06-22 19:50:21.035490 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:21.035501 | orchestrator | 2025-06-22 19:50:21.035550 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035579 | orchestrator | Sunday 22 June 2025 19:50:17 +0000 (0:00:00.219) 0:00:50.638 *********** 2025-06-22 19:50:21.035591 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:21.035602 | orchestrator | 2025-06-22 19:50:21.035613 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035624 | orchestrator | Sunday 22 June 2025 19:50:17 +0000 (0:00:00.202) 0:00:50.841 *********** 2025-06-22 19:50:21.035634 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:21.035645 | orchestrator | 2025-06-22 19:50:21.035656 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035667 | orchestrator | Sunday 22 June 2025 19:50:17 +0000 (0:00:00.185) 0:00:51.026 *********** 2025-06-22 19:50:21.035678 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:21.035688 | orchestrator | 2025-06-22 19:50:21.035699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035710 | orchestrator | Sunday 22 June 2025 19:50:17 +0000 (0:00:00.181) 0:00:51.208 *********** 2025-06-22 19:50:21.035721 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:21.035732 | orchestrator | 2025-06-22 19:50:21.035742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035753 | orchestrator | Sunday 22 June 2025 19:50:18 +0000 (0:00:00.471) 0:00:51.679 *********** 2025-06-22 19:50:21.035764 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:21.035775 | orchestrator | 2025-06-22 19:50:21.035786 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-06-22 19:50:21.035796 | orchestrator | Sunday 22 June 2025 19:50:18 +0000 (0:00:00.177) 0:00:51.857 *********** 2025-06-22 19:50:21.035807 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:21.035818 | orchestrator | 2025-06-22 19:50:21.035829 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035840 | orchestrator | Sunday 22 June 2025 19:50:18 +0000 (0:00:00.174) 0:00:52.031 *********** 2025-06-22 19:50:21.035850 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6) 2025-06-22 19:50:21.035862 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6) 2025-06-22 19:50:21.035873 | orchestrator | 2025-06-22 19:50:21.035884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035895 | orchestrator | Sunday 22 June 2025 19:50:19 +0000 (0:00:00.383) 0:00:52.415 *********** 2025-06-22 19:50:21.035905 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2) 2025-06-22 19:50:21.035923 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2) 2025-06-22 19:50:21.035934 | orchestrator | 2025-06-22 19:50:21.035945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.035956 | orchestrator | Sunday 22 June 2025 19:50:19 +0000 (0:00:00.393) 0:00:52.808 *********** 2025-06-22 19:50:21.035967 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae) 2025-06-22 19:50:21.035977 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae) 2025-06-22 19:50:21.035988 | orchestrator | 2025-06-22 19:50:21.035999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.036010 | orchestrator | Sunday 22 June 2025 19:50:19 +0000 (0:00:00.415) 0:00:53.223 *********** 2025-06-22 19:50:21.036021 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2) 2025-06-22 19:50:21.036032 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2) 2025-06-22 19:50:21.036043 | orchestrator | 2025-06-22 19:50:21.036053 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:50:21.036064 | orchestrator | Sunday 22 June 2025 19:50:20 +0000 (0:00:00.408) 0:00:53.631 *********** 2025-06-22 19:50:21.036075 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:50:21.036086 | orchestrator | 2025-06-22 19:50:21.036097 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:21.036107 | orchestrator | Sunday 22 June 2025 19:50:20 +0000 (0:00:00.310) 0:00:53.941 *********** 2025-06-22 19:50:21.036118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:50:21.036129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:50:21.036139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 
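The lvm_report JSON printed for testbed-node-4 above is assembled by the "Get list of Ceph LVs/PVs with associated VGs" tasks and the "Combine JSON from _lvs_cmd_output/_pvs_cmd_output" step. A minimal sketch of how such a report can be gathered with the JSON report mode of the LVM CLI; the --select filter on the ceph- VG prefix is an assumption about how the listing is narrowed, not a confirmed detail of the real tasks:

- name: Get list of Ceph LVs with associated VGs
  ansible.builtin.command: >-
    lvs --reportformat json -o lv_name,vg_name --select 'vg_name =~ "^ceph-"'
  register: _lvs_cmd_output
  changed_when: false

- name: Get list of Ceph PVs with associated VGs
  ansible.builtin.command: >-
    pvs --reportformat json -o pv_name,vg_name --select 'vg_name =~ "^ceph-"'
  register: _pvs_cmd_output
  changed_when: false

- name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
  ansible.builtin.set_fact:
    # lvs/pvs --reportformat json wrap their rows in report[0].lv and report[0].pv
    lvm_report:
      lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
      pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"
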
2025-06-22 19:50:21.036155 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:50:21.036166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:50:21.036176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:50:21.036187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:50:21.036198 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:50:21.036231 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-22 19:50:21.036242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:50:21.036253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:50:21.036270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:50:29.231080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:50:29.231175 | orchestrator | 2025-06-22 19:50:29.231192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231204 | orchestrator | Sunday 22 June 2025 19:50:21 +0000 (0:00:00.362) 0:00:54.304 *********** 2025-06-22 19:50:29.231274 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231287 | orchestrator | 2025-06-22 19:50:29.231298 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231309 | orchestrator | Sunday 22 June 2025 19:50:21 +0000 (0:00:00.202) 0:00:54.506 *********** 2025-06-22 19:50:29.231320 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231331 | orchestrator | 2025-06-22 19:50:29.231342 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231376 | orchestrator | Sunday 22 June 2025 19:50:21 +0000 (0:00:00.171) 0:00:54.678 *********** 2025-06-22 19:50:29.231388 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231399 | orchestrator | 2025-06-22 19:50:29.231409 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231420 | orchestrator | Sunday 22 June 2025 19:50:21 +0000 (0:00:00.490) 0:00:55.168 *********** 2025-06-22 19:50:29.231431 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231442 | orchestrator | 2025-06-22 19:50:29.231453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231463 | orchestrator | Sunday 22 June 2025 19:50:22 +0000 (0:00:00.189) 0:00:55.357 *********** 2025-06-22 19:50:29.231474 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231485 | orchestrator | 2025-06-22 19:50:29.231496 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231507 | orchestrator | Sunday 22 June 2025 19:50:22 +0000 (0:00:00.187) 0:00:55.545 *********** 2025-06-22 19:50:29.231517 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231528 | orchestrator | 2025-06-22 19:50:29.231540 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231551 | orchestrator | Sunday 22 June 2025 19:50:22 +0000 (0:00:00.187) 0:00:55.732 *********** 2025-06-22 19:50:29.231562 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231572 | orchestrator | 2025-06-22 19:50:29.231583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231594 | orchestrator | Sunday 22 June 2025 19:50:22 +0000 (0:00:00.181) 0:00:55.913 *********** 2025-06-22 19:50:29.231605 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231616 | orchestrator | 2025-06-22 19:50:29.231627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231639 | orchestrator | Sunday 22 June 2025 19:50:22 +0000 (0:00:00.191) 0:00:56.105 *********** 2025-06-22 19:50:29.231651 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-22 19:50:29.231663 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-22 19:50:29.231675 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-22 19:50:29.231687 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-22 19:50:29.231700 | orchestrator | 2025-06-22 19:50:29.231712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231724 | orchestrator | Sunday 22 June 2025 19:50:23 +0000 (0:00:00.602) 0:00:56.708 *********** 2025-06-22 19:50:29.231736 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231748 | orchestrator | 2025-06-22 19:50:29.231760 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231772 | orchestrator | Sunday 22 June 2025 19:50:23 +0000 (0:00:00.179) 0:00:56.888 *********** 2025-06-22 19:50:29.231784 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231797 | orchestrator | 2025-06-22 19:50:29.231809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231821 | orchestrator | Sunday 22 June 2025 19:50:23 +0000 (0:00:00.174) 0:00:57.062 *********** 2025-06-22 19:50:29.231834 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231845 | orchestrator | 2025-06-22 19:50:29.231857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:50:29.231869 | orchestrator | Sunday 22 June 2025 19:50:23 +0000 (0:00:00.188) 0:00:57.250 *********** 2025-06-22 19:50:29.231882 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231894 | orchestrator | 2025-06-22 19:50:29.231906 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:50:29.231918 | orchestrator | Sunday 22 June 2025 19:50:24 +0000 (0:00:00.188) 0:00:57.439 *********** 2025-06-22 19:50:29.231930 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.231943 | orchestrator | 2025-06-22 19:50:29.231955 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:50:29.231974 | orchestrator | Sunday 22 June 2025 19:50:24 +0000 (0:00:00.117) 0:00:57.557 *********** 2025-06-22 19:50:29.232000 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b2f14396-315c-50f9-a6a7-8817318b41c3'}}) 2025-06-22 19:50:29.232012 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': '60bbbdec-af53-55ad-b293-31f676104815'}}) 2025-06-22 19:50:29.232023 | orchestrator | 2025-06-22 19:50:29.232035 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:50:29.232046 | orchestrator | Sunday 22 June 2025 19:50:24 +0000 (0:00:00.314) 0:00:57.872 *********** 2025-06-22 19:50:29.232057 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'}) 2025-06-22 19:50:29.232075 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'}) 2025-06-22 19:50:29.232094 | orchestrator | 2025-06-22 19:50:29.232114 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 19:50:29.232153 | orchestrator | Sunday 22 June 2025 19:50:26 +0000 (0:00:01.863) 0:00:59.735 *********** 2025-06-22 19:50:29.232173 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:29.232190 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:29.232228 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.232248 | orchestrator | 2025-06-22 19:50:29.232263 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 19:50:29.232280 | orchestrator | Sunday 22 June 2025 19:50:26 +0000 (0:00:00.149) 0:00:59.885 *********** 2025-06-22 19:50:29.232296 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'}) 2025-06-22 19:50:29.232314 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'}) 2025-06-22 19:50:29.232330 | orchestrator | 2025-06-22 19:50:29.232347 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:50:29.232364 | orchestrator | Sunday 22 June 2025 19:50:27 +0000 (0:00:01.264) 0:01:01.149 *********** 2025-06-22 19:50:29.232381 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:29.232399 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:29.232416 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.232433 | orchestrator | 2025-06-22 19:50:29.232449 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:50:29.232466 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.147) 0:01:01.297 *********** 2025-06-22 19:50:29.232483 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.232500 | orchestrator | 2025-06-22 19:50:29.232519 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:50:29.232536 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.131) 0:01:01.428 
*********** 2025-06-22 19:50:29.232554 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:29.232572 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:29.232590 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.232624 | orchestrator | 2025-06-22 19:50:29.232642 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-22 19:50:29.232659 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.132) 0:01:01.560 *********** 2025-06-22 19:50:29.232677 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.232695 | orchestrator | 2025-06-22 19:50:29.232712 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:50:29.232728 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.117) 0:01:01.678 *********** 2025-06-22 19:50:29.232747 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:29.232766 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:29.232783 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.232801 | orchestrator | 2025-06-22 19:50:29.232818 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:50:29.232835 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.146) 0:01:01.825 *********** 2025-06-22 19:50:29.232854 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.232872 | orchestrator | 2025-06-22 19:50:29.232890 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:50:29.232919 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.128) 0:01:01.953 *********** 2025-06-22 19:50:29.232939 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:29.232959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:29.232978 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:29.232995 | orchestrator | 2025-06-22 19:50:29.233012 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:50:29.233030 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.136) 0:01:02.089 *********** 2025-06-22 19:50:29.233049 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:29.233066 | orchestrator | 2025-06-22 19:50:29.233084 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:50:29.233101 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.132) 0:01:02.222 *********** 2025-06-22 19:50:29.233138 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:34.818566 
| orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:34.818675 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.818695 | orchestrator | 2025-06-22 19:50:34.818708 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:50:34.818720 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.287) 0:01:02.509 *********** 2025-06-22 19:50:34.818731 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:34.818746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:34.818764 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.818781 | orchestrator | 2025-06-22 19:50:34.818799 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:50:34.818817 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.123) 0:01:02.632 *********** 2025-06-22 19:50:34.818835 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:34.818880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:34.818899 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.818918 | orchestrator | 2025-06-22 19:50:34.818936 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:50:34.818955 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.141) 0:01:02.774 *********** 2025-06-22 19:50:34.818973 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.818991 | orchestrator | 2025-06-22 19:50:34.819008 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:50:34.819027 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.130) 0:01:02.904 *********** 2025-06-22 19:50:34.819045 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.819064 | orchestrator | 2025-06-22 19:50:34.819082 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:50:34.819100 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.132) 0:01:03.036 *********** 2025-06-22 19:50:34.819118 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.819137 | orchestrator | 2025-06-22 19:50:34.819153 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:50:34.819167 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:00.130) 0:01:03.167 *********** 2025-06-22 19:50:34.819179 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:50:34.819192 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:50:34.819204 | orchestrator | } 2025-06-22 19:50:34.819253 | orchestrator | 2025-06-22 19:50:34.819273 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:50:34.819294 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 
(0:00:00.138) 0:01:03.306 *********** 2025-06-22 19:50:34.819313 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:50:34.819326 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:50:34.819338 | orchestrator | } 2025-06-22 19:50:34.819357 | orchestrator | 2025-06-22 19:50:34.819373 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:50:34.819404 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:00.132) 0:01:03.439 *********** 2025-06-22 19:50:34.819423 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:50:34.819441 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:50:34.819459 | orchestrator | } 2025-06-22 19:50:34.819477 | orchestrator | 2025-06-22 19:50:34.819494 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 19:50:34.819511 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:00.126) 0:01:03.565 *********** 2025-06-22 19:50:34.819530 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:34.819548 | orchestrator | 2025-06-22 19:50:34.819565 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:50:34.819584 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:00.494) 0:01:04.060 *********** 2025-06-22 19:50:34.819602 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:34.819619 | orchestrator | 2025-06-22 19:50:34.819637 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:50:34.819656 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:00.486) 0:01:04.547 *********** 2025-06-22 19:50:34.819674 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:34.819692 | orchestrator | 2025-06-22 19:50:34.819710 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:50:34.819727 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:00.498) 0:01:05.046 *********** 2025-06-22 19:50:34.819745 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:34.819763 | orchestrator | 2025-06-22 19:50:34.819781 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:50:34.819800 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.281) 0:01:05.328 *********** 2025-06-22 19:50:34.819835 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.819854 | orchestrator | 2025-06-22 19:50:34.819873 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:50:34.819910 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.124) 0:01:05.452 *********** 2025-06-22 19:50:34.819929 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.819946 | orchestrator | 2025-06-22 19:50:34.819964 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:50:34.819982 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.117) 0:01:05.570 *********** 2025-06-22 19:50:34.820000 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:50:34.820019 | orchestrator |  "vgs_report": { 2025-06-22 19:50:34.820037 | orchestrator |  "vg": [] 2025-06-22 19:50:34.820079 | orchestrator |  } 2025-06-22 19:50:34.820098 | orchestrator | } 2025-06-22 19:50:34.820115 | orchestrator | 2025-06-22 19:50:34.820134 | orchestrator | TASK [Print LVM VG sizes] 
****************************************************** 2025-06-22 19:50:34.820153 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.131) 0:01:05.702 *********** 2025-06-22 19:50:34.820171 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820186 | orchestrator | 2025-06-22 19:50:34.820197 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:50:34.820208 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.126) 0:01:05.829 *********** 2025-06-22 19:50:34.820244 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820255 | orchestrator | 2025-06-22 19:50:34.820266 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:50:34.820276 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.130) 0:01:05.959 *********** 2025-06-22 19:50:34.820287 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820300 | orchestrator | 2025-06-22 19:50:34.820319 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:50:34.820337 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.125) 0:01:06.085 *********** 2025-06-22 19:50:34.820355 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820372 | orchestrator | 2025-06-22 19:50:34.820390 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 19:50:34.820408 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:00.133) 0:01:06.219 *********** 2025-06-22 19:50:34.820425 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820444 | orchestrator | 2025-06-22 19:50:34.820461 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:50:34.820480 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.124) 0:01:06.344 *********** 2025-06-22 19:50:34.820499 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820518 | orchestrator | 2025-06-22 19:50:34.820537 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:50:34.820555 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.125) 0:01:06.469 *********** 2025-06-22 19:50:34.820574 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820592 | orchestrator | 2025-06-22 19:50:34.820610 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:50:34.820629 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.137) 0:01:06.607 *********** 2025-06-22 19:50:34.820647 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820666 | orchestrator | 2025-06-22 19:50:34.820684 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:50:34.820703 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.127) 0:01:06.735 *********** 2025-06-22 19:50:34.820721 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820739 | orchestrator | 2025-06-22 19:50:34.820758 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:50:34.820776 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.269) 0:01:07.004 *********** 2025-06-22 19:50:34.820795 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820828 | orchestrator | 2025-06-22 19:50:34.820847 | orchestrator | TASK 
[Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:50:34.820865 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.144) 0:01:07.148 *********** 2025-06-22 19:50:34.820882 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820900 | orchestrator | 2025-06-22 19:50:34.820919 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:50:34.820937 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.130) 0:01:07.278 *********** 2025-06-22 19:50:34.820954 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.820973 | orchestrator | 2025-06-22 19:50:34.820991 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:50:34.821010 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.131) 0:01:07.410 *********** 2025-06-22 19:50:34.821028 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.821045 | orchestrator | 2025-06-22 19:50:34.821064 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:50:34.821082 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.127) 0:01:07.537 *********** 2025-06-22 19:50:34.821100 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.821119 | orchestrator | 2025-06-22 19:50:34.821137 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 19:50:34.821156 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.128) 0:01:07.666 *********** 2025-06-22 19:50:34.821174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:34.821200 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:34.821300 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.821314 | orchestrator | 2025-06-22 19:50:34.821325 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:50:34.821335 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.151) 0:01:07.818 *********** 2025-06-22 19:50:34.821347 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:34.821358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:34.821369 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:34.821379 | orchestrator | 2025-06-22 19:50:34.821390 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:50:34.821401 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.144) 0:01:07.963 *********** 2025-06-22 19:50:34.821424 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:37.530583 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.530671 | 
orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.530685 | orchestrator | 2025-06-22 19:50:37.530698 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:50:37.530711 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.133) 0:01:08.096 *********** 2025-06-22 19:50:37.530723 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:37.530735 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.530746 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.530777 | orchestrator | 2025-06-22 19:50:37.530789 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:50:37.530800 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.136) 0:01:08.232 *********** 2025-06-22 19:50:37.530811 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:37.530822 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.530833 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.530844 | orchestrator | 2025-06-22 19:50:37.530854 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 19:50:37.530865 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:00.135) 0:01:08.368 *********** 2025-06-22 19:50:37.530876 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:37.530887 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.530898 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.530909 | orchestrator | 2025-06-22 19:50:37.530919 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:50:37.530931 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:00.130) 0:01:08.499 *********** 2025-06-22 19:50:37.530952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:37.530972 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.530992 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.531006 | orchestrator | 2025-06-22 19:50:37.531017 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:50:37.531028 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:00.268) 0:01:08.767 *********** 2025-06-22 19:50:37.531038 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 
19:50:37.531049 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.531060 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.531071 | orchestrator | 2025-06-22 19:50:37.531081 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:50:37.531092 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:00.143) 0:01:08.911 *********** 2025-06-22 19:50:37.531103 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:37.531114 | orchestrator | 2025-06-22 19:50:37.531138 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:50:37.531152 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.533) 0:01:09.445 *********** 2025-06-22 19:50:37.531163 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:37.531176 | orchestrator | 2025-06-22 19:50:37.531188 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:50:37.531200 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.485) 0:01:09.931 *********** 2025-06-22 19:50:37.531234 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:37.531249 | orchestrator | 2025-06-22 19:50:37.531264 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-22 19:50:37.531284 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.134) 0:01:10.065 *********** 2025-06-22 19:50:37.531304 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'vg_name': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'}) 2025-06-22 19:50:37.531340 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'vg_name': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'}) 2025-06-22 19:50:37.531352 | orchestrator | 2025-06-22 19:50:37.531362 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:50:37.531373 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.150) 0:01:10.216 *********** 2025-06-22 19:50:37.531401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:37.531413 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.531424 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.531435 | orchestrator | 2025-06-22 19:50:37.531446 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:50:37.531457 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:00.135) 0:01:10.351 *********** 2025-06-22 19:50:37.531468 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:37.531479 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.531491 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.531501 | orchestrator | 2025-06-22 19:50:37.531512 
| orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-22 19:50:37.531523 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:00.134) 0:01:10.485 *********** 2025-06-22 19:50:37.531534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'})  2025-06-22 19:50:37.531545 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'})  2025-06-22 19:50:37.531556 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:37.531567 | orchestrator | 2025-06-22 19:50:37.531578 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-22 19:50:37.531589 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:00.149) 0:01:10.634 *********** 2025-06-22 19:50:37.531600 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:50:37.531611 | orchestrator |  "lvm_report": { 2025-06-22 19:50:37.531622 | orchestrator |  "lv": [ 2025-06-22 19:50:37.531632 | orchestrator |  { 2025-06-22 19:50:37.531644 | orchestrator |  "lv_name": "osd-block-60bbbdec-af53-55ad-b293-31f676104815", 2025-06-22 19:50:37.531655 | orchestrator |  "vg_name": "ceph-60bbbdec-af53-55ad-b293-31f676104815" 2025-06-22 19:50:37.531666 | orchestrator |  }, 2025-06-22 19:50:37.531676 | orchestrator |  { 2025-06-22 19:50:37.531687 | orchestrator |  "lv_name": "osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3", 2025-06-22 19:50:37.531698 | orchestrator |  "vg_name": "ceph-b2f14396-315c-50f9-a6a7-8817318b41c3" 2025-06-22 19:50:37.531709 | orchestrator |  } 2025-06-22 19:50:37.531720 | orchestrator |  ], 2025-06-22 19:50:37.531730 | orchestrator |  "pv": [ 2025-06-22 19:50:37.531741 | orchestrator |  { 2025-06-22 19:50:37.531752 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-22 19:50:37.531763 | orchestrator |  "vg_name": "ceph-b2f14396-315c-50f9-a6a7-8817318b41c3" 2025-06-22 19:50:37.531774 | orchestrator |  }, 2025-06-22 19:50:37.531784 | orchestrator |  { 2025-06-22 19:50:37.531795 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-22 19:50:37.531806 | orchestrator |  "vg_name": "ceph-60bbbdec-af53-55ad-b293-31f676104815" 2025-06-22 19:50:37.531825 | orchestrator |  } 2025-06-22 19:50:37.531835 | orchestrator |  ] 2025-06-22 19:50:37.531846 | orchestrator |  } 2025-06-22 19:50:37.531857 | orchestrator | } 2025-06-22 19:50:37.531868 | orchestrator | 2025-06-22 19:50:37.531927 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:50:37.531941 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-22 19:50:37.531952 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-22 19:50:37.531969 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-06-22 19:50:37.531981 | orchestrator | 2025-06-22 19:50:37.531992 | orchestrator | 2025-06-22 19:50:37.532003 | orchestrator | 2025-06-22 19:50:37.532014 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:50:37.532025 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:00.151) 0:01:10.786 *********** 2025-06-22 19:50:37.532036 | orchestrator | 
=============================================================================== 2025-06-22 19:50:37.532047 | orchestrator | Create block VGs -------------------------------------------------------- 5.70s 2025-06-22 19:50:37.532057 | orchestrator | Create block LVs -------------------------------------------------------- 3.99s 2025-06-22 19:50:37.532068 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.89s 2025-06-22 19:50:37.532079 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.59s 2025-06-22 19:50:37.532090 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.59s 2025-06-22 19:50:37.532101 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.57s 2025-06-22 19:50:37.532112 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.52s 2025-06-22 19:50:37.532122 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s 2025-06-22 19:50:37.532141 | orchestrator | Add known links to the list of available block devices ------------------ 1.23s 2025-06-22 19:50:37.768411 | orchestrator | Add known partitions to the list of available block devices ------------- 1.09s 2025-06-22 19:50:37.768495 | orchestrator | Print LVM report data --------------------------------------------------- 0.93s 2025-06-22 19:50:37.768508 | orchestrator | Add known partitions to the list of available block devices ------------- 0.85s 2025-06-22 19:50:37.768519 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s 2025-06-22 19:50:37.768530 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.73s 2025-06-22 19:50:37.768540 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s 2025-06-22 19:50:37.768551 | orchestrator | Create dict of block VGs -> PVs from ceph_osd_devices ------------------- 0.69s 2025-06-22 19:50:37.768562 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-06-22 19:50:37.768573 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2025-06-22 19:50:37.768583 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.66s 2025-06-22 19:50:37.768594 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-06-22 19:50:39.538896 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:50:39.538955 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:50:39.539006 | orchestrator | Registering Redlock._release_script 2025-06-22 19:50:39.596093 | orchestrator | 2025-06-22 19:50:39 | INFO  | Task 51672183-8e8a-47da-8a81-94dd6fb75a5d (facts) was prepared for execution. 2025-06-22 19:50:39.596150 | orchestrator | 2025-06-22 19:50:39 | INFO  | It takes a moment until task 51672183-8e8a-47da-8a81-94dd6fb75a5d (facts) has been started and output is visible here. 
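Annotation: the Ceph LVM play above creates exactly one block VG/LV pair per entry of ceph_osd_devices, naming the volume group ceph-<osd_lvm_uuid> and the logical volume osd-block-<osd_lvm_uuid>, which is what the "Create block VGs" / "Create block LVs" items and the final lvm_report for /dev/sdb and /dev/sdc show. The following minimal Python sketch only reproduces that naming scheme from the values visible in the log; it is illustrative and not the OSISM implementation.

# Illustrative sketch only: derive the VG/LV names seen above for testbed-node-5
# from a ceph_osd_devices-style dict (values taken from this log).
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "b2f14396-315c-50f9-a6a7-8817318b41c3"},
    "sdc": {"osd_lvm_uuid": "60bbbdec-af53-55ad-b293-31f676104815"},
}

def lvm_layout(devices: dict) -> list[dict]:
    """Return one block VG/LV pair per OSD device, mirroring the log's naming."""
    layout = []
    for dev, meta in devices.items():
        uuid = meta["osd_lvm_uuid"]
        layout.append({
            "pv": f"/dev/{dev}",              # PV listed in the lvm_report above
            "data_vg": f"ceph-{uuid}",        # VG created by 'Create block VGs'
            "data": f"osd-block-{uuid}",      # LV created by 'Create block LVs'
        })
    return layout

for entry in lvm_layout(ceph_osd_devices):
    print(entry)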
2025-06-22 19:50:50.662062 | orchestrator | 2025-06-22 19:50:50.662154 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 19:50:50.662169 | orchestrator | 2025-06-22 19:50:50.662209 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:50:50.662258 | orchestrator | Sunday 22 June 2025 19:50:43 +0000 (0:00:00.244) 0:00:00.244 *********** 2025-06-22 19:50:50.662271 | orchestrator | ok: [testbed-manager] 2025-06-22 19:50:50.662283 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:50:50.662294 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:50:50.662305 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:50:50.662315 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:50:50.662326 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:50.662337 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:50.662348 | orchestrator | 2025-06-22 19:50:50.662359 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:50:50.662370 | orchestrator | Sunday 22 June 2025 19:50:44 +0000 (0:00:00.999) 0:00:01.244 *********** 2025-06-22 19:50:50.662381 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:50:50.662392 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:50:50.662403 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:50:50.662413 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:50:50.662424 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:50:50.662435 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:50.662445 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:50:50.662456 | orchestrator | 2025-06-22 19:50:50.662467 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:50:50.662478 | orchestrator | 2025-06-22 19:50:50.662488 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:50:50.662499 | orchestrator | Sunday 22 June 2025 19:50:45 +0000 (0:00:01.128) 0:00:02.372 *********** 2025-06-22 19:50:50.662510 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:50:50.662521 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:50:50.662532 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:50:50.662542 | orchestrator | ok: [testbed-manager] 2025-06-22 19:50:50.662553 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:50:50.662564 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:50:50.662575 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:50:50.662588 | orchestrator | 2025-06-22 19:50:50.662600 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:50:50.662612 | orchestrator | 2025-06-22 19:50:50.662624 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:50:50.662636 | orchestrator | Sunday 22 June 2025 19:50:49 +0000 (0:00:04.735) 0:00:07.108 *********** 2025-06-22 19:50:50.662648 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:50:50.662661 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:50:50.662673 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:50:50.662685 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:50:50.662698 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:50:50.662710 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:50:50.662722 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 19:50:50.662734 | orchestrator | 2025-06-22 19:50:50.662746 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:50:50.662758 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:50:50.662771 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:50:50.662783 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:50:50.662795 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:50:50.662830 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:50:50.662843 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:50:50.662855 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:50:50.662867 | orchestrator | 2025-06-22 19:50:50.662879 | orchestrator | 2025-06-22 19:50:50.662892 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:50:50.662904 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:00.453) 0:00:07.561 *********** 2025-06-22 19:50:50.662917 | orchestrator | =============================================================================== 2025-06-22 19:50:50.662928 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.74s 2025-06-22 19:50:50.662940 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.13s 2025-06-22 19:50:50.662950 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.00s 2025-06-22 19:50:50.662961 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.45s 2025-06-22 19:50:50.852773 | orchestrator | 2025-06-22 19:50:50.855209 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jun 22 19:50:50 UTC 2025 2025-06-22 19:50:50.855275 | orchestrator | 2025-06-22 19:50:52.429087 | orchestrator | 2025-06-22 19:50:52 | INFO  | Collection nutshell is prepared for execution 2025-06-22 19:50:52.431277 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [0] - dotfiles 2025-06-22 19:50:52.435513 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:50:52.435585 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:50:52.435600 | orchestrator | Registering Redlock._release_script 2025-06-22 19:50:52.439469 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [0] - homer 2025-06-22 19:50:52.439540 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [0] - netdata 2025-06-22 19:50:52.439584 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [0] - openstackclient 2025-06-22 19:50:52.439604 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [0] - phpmyadmin 2025-06-22 19:50:52.439623 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [0] - common 2025-06-22 19:50:52.441684 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [1] -- loadbalancer 2025-06-22 19:50:52.441817 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [2] --- opensearch 2025-06-22 19:50:52.441832 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [2] --- mariadb-ng 2025-06-22 19:50:52.441842 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [3] ---- horizon 2025-06-22 
19:50:52.442116 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [3] ---- keystone 2025-06-22 19:50:52.442362 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [4] ----- neutron 2025-06-22 19:50:52.442388 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [5] ------ wait-for-nova 2025-06-22 19:50:52.442407 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [5] ------ octavia 2025-06-22 19:50:52.443129 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [4] ----- barbican 2025-06-22 19:50:52.443451 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [4] ----- designate 2025-06-22 19:50:52.443548 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [4] ----- ironic 2025-06-22 19:50:52.443559 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [4] ----- placement 2025-06-22 19:50:52.443574 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [4] ----- magnum 2025-06-22 19:50:52.444423 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [1] -- openvswitch 2025-06-22 19:50:52.444454 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [2] --- ovn 2025-06-22 19:50:52.444501 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [1] -- memcached 2025-06-22 19:50:52.444522 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [1] -- redis 2025-06-22 19:50:52.444866 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [1] -- rabbitmq-ng 2025-06-22 19:50:52.444895 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [0] - kubernetes 2025-06-22 19:50:52.447198 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [1] -- kubeconfig 2025-06-22 19:50:52.447279 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [1] -- copy-kubeconfig 2025-06-22 19:50:52.447293 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [0] - ceph 2025-06-22 19:50:52.449305 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [1] -- ceph-pools 2025-06-22 19:50:52.449337 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [2] --- copy-ceph-keys 2025-06-22 19:50:52.449349 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [3] ---- cephclient 2025-06-22 19:50:52.449360 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-22 19:50:52.449681 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [4] ----- wait-for-keystone 2025-06-22 19:50:52.449704 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-22 19:50:52.449715 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [5] ------ glance 2025-06-22 19:50:52.449993 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [5] ------ cinder 2025-06-22 19:50:52.450370 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [5] ------ nova 2025-06-22 19:50:52.450398 | orchestrator | 2025-06-22 19:50:52 | INFO  | A [4] ----- prometheus 2025-06-22 19:50:52.450410 | orchestrator | 2025-06-22 19:50:52 | INFO  | D [5] ------ grafana 2025-06-22 19:50:52.609340 | orchestrator | 2025-06-22 19:50:52 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-06-22 19:50:52.609996 | orchestrator | 2025-06-22 19:50:52 | INFO  | Tasks are running in the background 2025-06-22 19:50:55.058467 | orchestrator | 2025-06-22 19:50:55 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-22 19:50:57.187345 | orchestrator | 2025-06-22 19:50:57 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:50:57.187540 | orchestrator | 2025-06-22 19:50:57 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:50:57.190969 | orchestrator | 2025-06-22 19:50:57 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 
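Annotation: once the nutshell collection is prepared, the tasks run in the background on the manager and the console only polls their states, checking each task ID once per second until it leaves STARTED, as the repeated "Wait 1 second(s) until the next check" lines below show. The short Python sketch that follows illustrates that polling pattern only; get_state is a hypothetical callable and this is not the actual osism client code.

# Hypothetical polling sketch (assumes a get_state(task_id) callable exists):
# keep checking until no task reports STARTED anymore, sleeping between rounds.
import time

def wait_for_tasks(task_ids, get_state, interval=1):
    """Poll get_state(task_id) once per interval until all tasks have finished."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)          # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)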
2025-06-22 19:50:57.191355 | orchestrator | 2025-06-22 19:50:57 | INFO  | Task a73f0d29-d471-4918-a4e3-7ecb0228d157 is in state STARTED 2025-06-22 19:50:57.191918 | orchestrator | 2025-06-22 19:50:57 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:50:57.192479 | orchestrator | 2025-06-22 19:50:57 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:50:57.194155 | orchestrator | 2025-06-22 19:50:57 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:50:57.194182 | orchestrator | 2025-06-22 19:50:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:00.229866 | orchestrator | 2025-06-22 19:51:00 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:00.230335 | orchestrator | 2025-06-22 19:51:00 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:00.231430 | orchestrator | 2025-06-22 19:51:00 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:00.232033 | orchestrator | 2025-06-22 19:51:00 | INFO  | Task a73f0d29-d471-4918-a4e3-7ecb0228d157 is in state STARTED 2025-06-22 19:51:00.233665 | orchestrator | 2025-06-22 19:51:00 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:00.234129 | orchestrator | 2025-06-22 19:51:00 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:00.234614 | orchestrator | 2025-06-22 19:51:00 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:00.234648 | orchestrator | 2025-06-22 19:51:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:03.297468 | orchestrator | 2025-06-22 19:51:03 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:03.297731 | orchestrator | 2025-06-22 19:51:03 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:03.298400 | orchestrator | 2025-06-22 19:51:03 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:03.298826 | orchestrator | 2025-06-22 19:51:03 | INFO  | Task a73f0d29-d471-4918-a4e3-7ecb0228d157 is in state STARTED 2025-06-22 19:51:03.300855 | orchestrator | 2025-06-22 19:51:03 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:03.301440 | orchestrator | 2025-06-22 19:51:03 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:03.302107 | orchestrator | 2025-06-22 19:51:03 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:03.302132 | orchestrator | 2025-06-22 19:51:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:06.337481 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:06.349798 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:06.352828 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:06.353661 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task a73f0d29-d471-4918-a4e3-7ecb0228d157 is in state STARTED 2025-06-22 19:51:06.355057 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:06.355680 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task 
7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:06.357741 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:06.357783 | orchestrator | 2025-06-22 19:51:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:09.435154 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:09.435699 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:09.438700 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:09.439118 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task a73f0d29-d471-4918-a4e3-7ecb0228d157 is in state STARTED 2025-06-22 19:51:09.441258 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:09.441664 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:09.443441 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:09.443464 | orchestrator | 2025-06-22 19:51:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:12.505967 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:12.508529 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:12.510865 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:12.518823 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task a73f0d29-d471-4918-a4e3-7ecb0228d157 is in state STARTED 2025-06-22 19:51:12.529345 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:12.531465 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:12.536265 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:12.536297 | orchestrator | 2025-06-22 19:51:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:15.577923 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:15.578129 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:15.578155 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:15.579380 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task a73f0d29-d471-4918-a4e3-7ecb0228d157 is in state STARTED 2025-06-22 19:51:15.582377 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:15.582399 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:15.582750 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:15.582766 | orchestrator | 2025-06-22 19:51:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:18.673204 | orchestrator | 2025-06-22 
19:51:18 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:18.674678 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:18.682690 | orchestrator | 2025-06-22 19:51:18.682739 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-22 19:51:18.682749 | orchestrator | 2025-06-22 19:51:18.682766 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-06-22 19:51:18.682774 | orchestrator | Sunday 22 June 2025 19:51:03 +0000 (0:00:00.399) 0:00:00.399 *********** 2025-06-22 19:51:18.682781 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:18.682789 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:18.682796 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:18.682804 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:18.682811 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:51:18.682818 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:18.682825 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:18.682833 | orchestrator | 2025-06-22 19:51:18.682840 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-06-22 19:51:18.682847 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:03.254) 0:00:03.654 *********** 2025-06-22 19:51:18.682855 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:51:18.682862 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:51:18.682869 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:51:18.682877 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:51:18.682902 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:51:18.682910 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:51:18.682917 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:51:18.682924 | orchestrator | 2025-06-22 19:51:18.682931 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-06-22 19:51:18.682939 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:02.453) 0:00:06.108 *********** 2025-06-22 19:51:18.682949 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:51:07.286952', 'end': '2025-06-22 19:51:07.292793', 'delta': '0:00:00.005841', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:51:18.682968 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:51:07.242091', 'end': '2025-06-22 19:51:07.253027', 'delta': '0:00:00.010936', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:51:18.682976 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:51:07.290273', 'end': '2025-06-22 19:51:07.297666', 'delta': '0:00:00.007393', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:51:18.683000 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:51:07.699153', 'end': '2025-06-22 19:51:07.707026', 'delta': '0:00:00.007873', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
2025-06-22 19:51:18.683009 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:51:08.002481', 'end': '2025-06-22 19:51:08.011001', 'delta': '0:00:00.008520', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:51:18.683021 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:51:08.310757', 'end': '2025-06-22 19:51:08.321088', 'delta': '0:00:00.010331', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:51:18.683029 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:51:08.602021', 'end': '2025-06-22 19:51:08.608411', 'delta': '0:00:00.006390', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:51:18.683037 | orchestrator | 2025-06-22 19:51:18.683044 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-06-22 19:51:18.683052 | orchestrator | Sunday 22 June 2025 19:51:11 +0000 (0:00:02.647) 0:00:08.755 *********** 2025-06-22 19:51:18.683059 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:51:18.683066 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:51:18.683073 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:51:18.683080 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:51:18.683088 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:51:18.683095 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:51:18.683102 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:51:18.683109 | orchestrator | 2025-06-22 19:51:18.683117 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-22 19:51:18.683124 | orchestrator | Sunday 22 June 2025 19:51:13 +0000 (0:00:01.994) 0:00:10.750 *********** 2025-06-22 19:51:18.683131 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:51:18.683138 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:51:18.683145 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:51:18.683152 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:51:18.683159 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:51:18.683166 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:51:18.683173 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:51:18.683189 | orchestrator | 2025-06-22 19:51:18.683197 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:51:18.683209 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:18.683218 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:18.683225 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:18.683253 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:18.683261 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:18.683268 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:18.683275 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:18.683282 | orchestrator | 2025-06-22 19:51:18.683289 | orchestrator | 2025-06-22 19:51:18.683297 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:51:18.683304 | orchestrator | Sunday 22 June 2025 19:51:17 +0000 (0:00:03.813) 0:00:14.563 *********** 2025-06-22 19:51:18.683311 | orchestrator | =============================================================================== 2025-06-22 19:51:18.683318 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.81s 2025-06-22 19:51:18.683325 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. 
---- 3.25s 2025-06-22 19:51:18.683332 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.65s 2025-06-22 19:51:18.683339 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.45s 2025-06-22 19:51:18.683346 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.99s 2025-06-22 19:51:18.683353 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:18.683361 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task a73f0d29-d471-4918-a4e3-7ecb0228d157 is in state SUCCESS 2025-06-22 19:51:18.683368 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:18.683375 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:18.683383 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:18.683577 | orchestrator | 2025-06-22 19:51:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:21.726406 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:21.729501 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:21.729530 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:21.735003 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:21.735852 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:21.738529 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:21.743151 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:21.743198 | orchestrator | 2025-06-22 19:51:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:24.787747 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:24.787834 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:24.790886 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:24.792466 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:24.794086 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:24.795396 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:24.796468 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:24.796493 | orchestrator | 2025-06-22 19:51:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:27.849101 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:27.849201 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 
db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:27.850728 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:27.851387 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:27.855298 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:27.855347 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:27.855355 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:27.855362 | orchestrator | 2025-06-22 19:51:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:30.909708 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:30.910267 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:30.912511 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:30.912548 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:30.914261 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:30.917637 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:30.917670 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:30.917696 | orchestrator | 2025-06-22 19:51:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:33.963514 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:33.968132 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:33.970171 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:33.971445 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:33.972925 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:33.976323 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:33.976834 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:33.977168 | orchestrator | 2025-06-22 19:51:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:37.026313 | orchestrator | 2025-06-22 19:51:37 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:37.030873 | orchestrator | 2025-06-22 19:51:37 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:37.033375 | orchestrator | 2025-06-22 19:51:37 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:37.040616 | orchestrator | 2025-06-22 19:51:37 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state STARTED 2025-06-22 19:51:37.057297 | 
orchestrator | 2025-06-22 19:51:37 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:37.060595 | orchestrator | 2025-06-22 19:51:37 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:37.068626 | orchestrator | 2025-06-22 19:51:37 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:37.068670 | orchestrator | 2025-06-22 19:51:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:40.159642 | orchestrator | 2025-06-22 19:51:40 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:40.159809 | orchestrator | 2025-06-22 19:51:40 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:40.159824 | orchestrator | 2025-06-22 19:51:40 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:40.159850 | orchestrator | 2025-06-22 19:51:40 | INFO  | Task 90ed5160-5ddd-4a9f-9f91-bda44291c36f is in state SUCCESS 2025-06-22 19:51:40.164842 | orchestrator | 2025-06-22 19:51:40 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:40.167612 | orchestrator | 2025-06-22 19:51:40 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:40.167888 | orchestrator | 2025-06-22 19:51:40 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:40.168028 | orchestrator | 2025-06-22 19:51:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:43.198292 | orchestrator | 2025-06-22 19:51:43 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:43.198768 | orchestrator | 2025-06-22 19:51:43 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:43.199769 | orchestrator | 2025-06-22 19:51:43 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:43.200690 | orchestrator | 2025-06-22 19:51:43 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:43.201718 | orchestrator | 2025-06-22 19:51:43 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:43.202404 | orchestrator | 2025-06-22 19:51:43 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:43.202686 | orchestrator | 2025-06-22 19:51:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:46.243498 | orchestrator | 2025-06-22 19:51:46 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:46.244315 | orchestrator | 2025-06-22 19:51:46 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:46.244736 | orchestrator | 2025-06-22 19:51:46 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:46.246553 | orchestrator | 2025-06-22 19:51:46 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:46.247031 | orchestrator | 2025-06-22 19:51:46 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:46.251437 | orchestrator | 2025-06-22 19:51:46 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:46.251461 | orchestrator | 2025-06-22 19:51:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:49.283674 | orchestrator | 2025-06-22 19:51:49 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 
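The recurring "Task <id> is in state STARTED" and "Wait 1 second(s) until the next check" messages here are the deployment wrapper on the manager polling the task IDs it dispatched for the remaining plays; an ID drops out of later checks once it reports SUCCESS, as the shrinking list of polled tasks in this log shows. A minimal Python sketch of that wait loop, assuming a hypothetical poll_state(task_id) helper that returns the current state string (the real manager API may differ):

import time

def wait_for_tasks(task_ids, poll_state, interval=1):
    # Poll every dispatched task until it leaves the STARTED state.
    # poll_state(task_id) is assumed to return a state string such as
    # "STARTED" or "SUCCESS"; the actual client interface may differ.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = poll_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

The polling output continues below while the homer, openstackclient and netdata plays run to completion.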
2025-06-22 19:51:49.283756 | orchestrator | 2025-06-22 19:51:49 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state STARTED 2025-06-22 19:51:49.284585 | orchestrator | 2025-06-22 19:51:49 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:49.285131 | orchestrator | 2025-06-22 19:51:49 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:49.289233 | orchestrator | 2025-06-22 19:51:49 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:49.290114 | orchestrator | 2025-06-22 19:51:49 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:49.290140 | orchestrator | 2025-06-22 19:51:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:52.341793 | orchestrator | 2025-06-22 19:51:52 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:52.341903 | orchestrator | 2025-06-22 19:51:52 | INFO  | Task db34cce9-3083-468a-b940-ab077af79cc2 is in state SUCCESS 2025-06-22 19:51:52.343923 | orchestrator | 2025-06-22 19:51:52 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:52.344077 | orchestrator | 2025-06-22 19:51:52 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:52.345579 | orchestrator | 2025-06-22 19:51:52 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:52.347511 | orchestrator | 2025-06-22 19:51:52 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:52.347582 | orchestrator | 2025-06-22 19:51:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:55.383120 | orchestrator | 2025-06-22 19:51:55 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:55.386913 | orchestrator | 2025-06-22 19:51:55 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:55.389430 | orchestrator | 2025-06-22 19:51:55 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:55.391804 | orchestrator | 2025-06-22 19:51:55 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:55.394647 | orchestrator | 2025-06-22 19:51:55 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:55.394690 | orchestrator | 2025-06-22 19:51:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:58.435107 | orchestrator | 2025-06-22 19:51:58 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:51:58.436322 | orchestrator | 2025-06-22 19:51:58 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:51:58.436382 | orchestrator | 2025-06-22 19:51:58 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:51:58.437415 | orchestrator | 2025-06-22 19:51:58 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:51:58.440934 | orchestrator | 2025-06-22 19:51:58 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:51:58.440965 | orchestrator | 2025-06-22 19:51:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:01.477707 | orchestrator | 2025-06-22 19:52:01 | INFO  | Task ddf1584c-37b6-4afc-9723-ef86286def88 is in state STARTED 2025-06-22 19:52:01.479815 | orchestrator | 2025-06-22 19:52:01 | INFO  | Task 
d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:01.480477 | orchestrator | 2025-06-22 19:52:01 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:01.481622 | orchestrator | 2025-06-22 19:52:01 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:01.482202 | orchestrator | 2025-06-22 19:52:01 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:01.482235 | orchestrator | 2025-06-22 19:52:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:04.518677 | orchestrator | 2025-06-22 19:52:04.518787 | orchestrator | 2025-06-22 19:52:04.518812 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-22 19:52:04.518826 | orchestrator | 2025-06-22 19:52:04.518838 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-22 19:52:04.518850 | orchestrator | Sunday 22 June 2025 19:51:04 +0000 (0:00:00.533) 0:00:00.533 *********** 2025-06-22 19:52:04.518861 | orchestrator | ok: [testbed-manager] => { 2025-06-22 19:52:04.518873 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-22 19:52:04.518885 | orchestrator | } 2025-06-22 19:52:04.518897 | orchestrator | 2025-06-22 19:52:04.518908 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-22 19:52:04.518920 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:00.232) 0:00:00.765 *********** 2025-06-22 19:52:04.518944 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.518956 | orchestrator | 2025-06-22 19:52:04.518967 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-22 19:52:04.518979 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:01.584) 0:00:02.350 *********** 2025-06-22 19:52:04.518990 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-22 19:52:04.519001 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-22 19:52:04.519012 | orchestrator | 2025-06-22 19:52:04.519023 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-22 19:52:04.519035 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:01.551) 0:00:03.901 *********** 2025-06-22 19:52:04.519045 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.519057 | orchestrator | 2025-06-22 19:52:04.519068 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-22 19:52:04.519079 | orchestrator | Sunday 22 June 2025 19:51:10 +0000 (0:00:02.464) 0:00:06.366 *********** 2025-06-22 19:52:04.519090 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.519101 | orchestrator | 2025-06-22 19:52:04.519112 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-22 19:52:04.519124 | orchestrator | Sunday 22 June 2025 19:51:12 +0000 (0:00:01.767) 0:00:08.134 *********** 2025-06-22 19:52:04.519136 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
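The FAILED - RETRYING line above comes from a retries/until loop on the "Manage homer service" task: the role re-checks the freshly started docker compose service until it settles, and a later attempt succeeds (the ok: [testbed-manager] result follows below, with the task accounting for 23.90s in the recap). A rough Python sketch of the same retry-until-up idea, with is_stack_up() standing in as a placeholder for whatever condition the role actually evaluates:

import subprocess
import time

def manage_compose_service(compose_dir, retries=10, delay=5):
    # Start the stack, then keep re-checking until it is up, mirroring
    # the retries/until pattern behind the FAILED - RETRYING messages.
    subprocess.run(["docker", "compose", "up", "-d"], cwd=compose_dir, check=True)
    for attempts_left in range(retries, 0, -1):
        if is_stack_up(compose_dir):
            return
        print(f"FAILED - RETRYING: Manage service ({attempts_left} retries left).")
        time.sleep(delay)
    raise RuntimeError(f"service in {compose_dir} did not come up in time")

def is_stack_up(compose_dir):
    # Placeholder health check: treat the stack as up once at least one
    # compose container is reported in the "running" state.
    result = subprocess.run(
        ["docker", "compose", "ps", "--status", "running", "--quiet"],
        cwd=compose_dir, capture_output=True, text=True,
    )
    return bool(result.stdout.strip())
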
2025-06-22 19:52:04.519165 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.519177 | orchestrator | 2025-06-22 19:52:04.519188 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-22 19:52:04.519226 | orchestrator | Sunday 22 June 2025 19:51:36 +0000 (0:00:23.898) 0:00:32.032 *********** 2025-06-22 19:52:04.519240 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.519252 | orchestrator | 2025-06-22 19:52:04.519288 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:52:04.519302 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.519315 | orchestrator | 2025-06-22 19:52:04.519327 | orchestrator | 2025-06-22 19:52:04.519340 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:52:04.519352 | orchestrator | Sunday 22 June 2025 19:51:38 +0000 (0:00:02.329) 0:00:34.362 *********** 2025-06-22 19:52:04.519365 | orchestrator | =============================================================================== 2025-06-22 19:52:04.519378 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 23.90s 2025-06-22 19:52:04.519390 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.46s 2025-06-22 19:52:04.519401 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.33s 2025-06-22 19:52:04.519412 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.77s 2025-06-22 19:52:04.519423 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.58s 2025-06-22 19:52:04.519433 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.55s 2025-06-22 19:52:04.519444 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.23s 2025-06-22 19:52:04.519455 | orchestrator | 2025-06-22 19:52:04.519466 | orchestrator | 2025-06-22 19:52:04.519493 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-22 19:52:04.519505 | orchestrator | 2025-06-22 19:52:04.519516 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-22 19:52:04.519527 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:00.599) 0:00:00.599 *********** 2025-06-22 19:52:04.519538 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-22 19:52:04.519550 | orchestrator | 2025-06-22 19:52:04.519561 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-22 19:52:04.519572 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:00.623) 0:00:01.222 *********** 2025-06-22 19:52:04.519595 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-22 19:52:04.519606 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-22 19:52:04.519617 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-22 19:52:04.519628 | orchestrator | 2025-06-22 19:52:04.519640 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-22 
19:52:04.519650 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:02.233) 0:00:03.456 *********** 2025-06-22 19:52:04.519661 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.519672 | orchestrator | 2025-06-22 19:52:04.519683 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-22 19:52:04.519694 | orchestrator | Sunday 22 June 2025 19:51:10 +0000 (0:00:01.904) 0:00:05.360 *********** 2025-06-22 19:52:04.519728 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-22 19:52:04.519740 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.519752 | orchestrator | 2025-06-22 19:52:04.519763 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-22 19:52:04.519774 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:35.459) 0:00:40.819 *********** 2025-06-22 19:52:04.519792 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.519803 | orchestrator | 2025-06-22 19:52:04.519814 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-22 19:52:04.519826 | orchestrator | Sunday 22 June 2025 19:51:46 +0000 (0:00:00.995) 0:00:41.815 *********** 2025-06-22 19:52:04.519837 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.519847 | orchestrator | 2025-06-22 19:52:04.519859 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-22 19:52:04.519870 | orchestrator | Sunday 22 June 2025 19:51:47 +0000 (0:00:01.124) 0:00:42.939 *********** 2025-06-22 19:52:04.519881 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.519892 | orchestrator | 2025-06-22 19:52:04.519903 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-22 19:52:04.519914 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:01.460) 0:00:44.400 *********** 2025-06-22 19:52:04.519925 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.519936 | orchestrator | 2025-06-22 19:52:04.519946 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-22 19:52:04.519957 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.662) 0:00:45.063 *********** 2025-06-22 19:52:04.519968 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.519979 | orchestrator | 2025-06-22 19:52:04.520092 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-22 19:52:04.520109 | orchestrator | Sunday 22 June 2025 19:51:50 +0000 (0:00:00.579) 0:00:45.643 *********** 2025-06-22 19:52:04.520120 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.520131 | orchestrator | 2025-06-22 19:52:04.520141 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:52:04.520153 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.520164 | orchestrator | 2025-06-22 19:52:04.520174 | orchestrator | 2025-06-22 19:52:04.520185 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:52:04.520196 | orchestrator | Sunday 22 June 2025 19:51:50 +0000 (0:00:00.351) 0:00:45.994 *********** 2025-06-22 19:52:04.520207 | orchestrator | 
=============================================================================== 2025-06-22 19:52:04.520218 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.46s 2025-06-22 19:52:04.520229 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.23s 2025-06-22 19:52:04.520240 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.90s 2025-06-22 19:52:04.520251 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.46s 2025-06-22 19:52:04.520301 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.12s 2025-06-22 19:52:04.520315 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.00s 2025-06-22 19:52:04.520326 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.66s 2025-06-22 19:52:04.520336 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.62s 2025-06-22 19:52:04.520347 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.58s 2025-06-22 19:52:04.520358 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.35s 2025-06-22 19:52:04.520369 | orchestrator | 2025-06-22 19:52:04.520380 | orchestrator | 2025-06-22 19:52:04.520391 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:52:04.520402 | orchestrator | 2025-06-22 19:52:04.520412 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:52:04.520423 | orchestrator | Sunday 22 June 2025 19:51:03 +0000 (0:00:00.547) 0:00:00.547 *********** 2025-06-22 19:52:04.520434 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-22 19:52:04.520445 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-22 19:52:04.520464 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-22 19:52:04.520475 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-22 19:52:04.520485 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-22 19:52:04.520496 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-22 19:52:04.520507 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-22 19:52:04.520517 | orchestrator | 2025-06-22 19:52:04.520528 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-22 19:52:04.520539 | orchestrator | 2025-06-22 19:52:04.520549 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-22 19:52:04.520560 | orchestrator | Sunday 22 June 2025 19:51:04 +0000 (0:00:01.932) 0:00:02.480 *********** 2025-06-22 19:52:04.520696 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:52:04.520734 | orchestrator | 2025-06-22 19:52:04.520755 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-22 19:52:04.520773 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:03.084) 0:00:05.565 *********** 2025-06-22 
19:52:04.520784 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.520795 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:04.520806 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:04.520817 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:04.520827 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:04.520848 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:04.520860 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:04.520870 | orchestrator | 2025-06-22 19:52:04.520881 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-22 19:52:04.520892 | orchestrator | Sunday 22 June 2025 19:51:10 +0000 (0:00:02.005) 0:00:07.571 *********** 2025-06-22 19:52:04.520903 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.520914 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:04.520925 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:04.520935 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:04.520946 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:04.520956 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:04.520967 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:04.520977 | orchestrator | 2025-06-22 19:52:04.520988 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-22 19:52:04.520999 | orchestrator | Sunday 22 June 2025 19:51:13 +0000 (0:00:03.217) 0:00:10.788 *********** 2025-06-22 19:52:04.521010 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.521021 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:04.521032 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:04.521042 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:04.521053 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:04.521064 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:04.521074 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:04.521085 | orchestrator | 2025-06-22 19:52:04.521096 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-22 19:52:04.521107 | orchestrator | Sunday 22 June 2025 19:51:16 +0000 (0:00:02.844) 0:00:13.633 *********** 2025-06-22 19:52:04.521117 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.521130 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:04.521148 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:04.521167 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:04.521185 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:04.521203 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:04.521221 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:04.521241 | orchestrator | 2025-06-22 19:52:04.521333 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-22 19:52:04.521360 | orchestrator | Sunday 22 June 2025 19:51:26 +0000 (0:00:10.326) 0:00:23.960 *********** 2025-06-22 19:52:04.521396 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.521417 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:04.521437 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:04.521458 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:04.521479 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:04.521500 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:04.521520 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:04.521541 | 
orchestrator | 2025-06-22 19:52:04.521588 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-22 19:52:04.521608 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:17.410) 0:00:41.371 *********** 2025-06-22 19:52:04.521629 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:52:04.521650 | orchestrator | 2025-06-22 19:52:04.521663 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-22 19:52:04.521675 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:01.716) 0:00:43.087 *********** 2025-06-22 19:52:04.521688 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-22 19:52:04.521699 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-22 19:52:04.521710 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-22 19:52:04.521721 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-22 19:52:04.521731 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-22 19:52:04.521742 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-22 19:52:04.521752 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-22 19:52:04.521763 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-22 19:52:04.521774 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-22 19:52:04.521817 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-22 19:52:04.521830 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-22 19:52:04.521841 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-22 19:52:04.521851 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-22 19:52:04.521862 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-22 19:52:04.521872 | orchestrator | 2025-06-22 19:52:04.521882 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-22 19:52:04.521892 | orchestrator | Sunday 22 June 2025 19:51:50 +0000 (0:00:05.041) 0:00:48.129 *********** 2025-06-22 19:52:04.521901 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.521911 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:04.521920 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:04.521930 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:04.521939 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:04.521949 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:04.521958 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:04.521967 | orchestrator | 2025-06-22 19:52:04.521977 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-22 19:52:04.521986 | orchestrator | Sunday 22 June 2025 19:51:52 +0000 (0:00:01.412) 0:00:49.542 *********** 2025-06-22 19:52:04.521996 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:04.522006 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.522015 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:04.522079 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:04.522089 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:04.522098 | orchestrator | 
changed: [testbed-node-4] 2025-06-22 19:52:04.522108 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:04.522117 | orchestrator | 2025-06-22 19:52:04.522127 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-22 19:52:04.522163 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:01.342) 0:00:50.884 *********** 2025-06-22 19:52:04.522178 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.522188 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:04.522197 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:04.522207 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:04.522216 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:04.522226 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:04.522235 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:04.522245 | orchestrator | 2025-06-22 19:52:04.522254 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-22 19:52:04.522289 | orchestrator | Sunday 22 June 2025 19:51:54 +0000 (0:00:01.384) 0:00:52.269 *********** 2025-06-22 19:52:04.522301 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:04.522310 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:04.522320 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:04.522329 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:04.522339 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:04.522348 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:04.522357 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:04.522367 | orchestrator | 2025-06-22 19:52:04.522377 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-22 19:52:04.522386 | orchestrator | Sunday 22 June 2025 19:51:56 +0000 (0:00:01.504) 0:00:53.773 *********** 2025-06-22 19:52:04.522396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-22 19:52:04.522407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:52:04.522418 | orchestrator | 2025-06-22 19:52:04.522427 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-22 19:52:04.522437 | orchestrator | Sunday 22 June 2025 19:51:57 +0000 (0:00:01.115) 0:00:54.889 *********** 2025-06-22 19:52:04.522447 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.522456 | orchestrator | 2025-06-22 19:52:04.522466 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-22 19:52:04.522476 | orchestrator | Sunday 22 June 2025 19:51:59 +0000 (0:00:01.648) 0:00:56.537 *********** 2025-06-22 19:52:04.522485 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:04.522495 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:04.522505 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:04.522514 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:04.522524 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:04.522533 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:04.522543 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:04.522552 | orchestrator | 2025-06-22 19:52:04.522562 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-22 19:52:04.522572 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.522582 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.522591 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.522601 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.522611 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.522621 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.522636 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:04.522660 | orchestrator | 2025-06-22 19:52:04.522670 | orchestrator | 2025-06-22 19:52:04.522680 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:52:04.522700 | orchestrator | Sunday 22 June 2025 19:52:01 +0000 (0:00:02.667) 0:00:59.205 *********** 2025-06-22 19:52:04.522710 | orchestrator | =============================================================================== 2025-06-22 19:52:04.522719 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 17.41s 2025-06-22 19:52:04.522729 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.32s 2025-06-22 19:52:04.522738 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.04s 2025-06-22 19:52:04.522748 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.22s 2025-06-22 19:52:04.522757 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 3.09s 2025-06-22 19:52:04.522767 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.85s 2025-06-22 19:52:04.522777 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 2.67s 2025-06-22 19:52:04.522786 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.01s 2025-06-22 19:52:04.522796 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.93s 2025-06-22 19:52:04.522805 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.72s 2025-06-22 19:52:04.522815 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.65s 2025-06-22 19:52:04.522831 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.50s 2025-06-22 19:52:04.522845 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.41s 2025-06-22 19:52:04.522855 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.38s 2025-06-22 19:52:04.522865 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.34s 2025-06-22 19:52:04.522875 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.12s 2025-06-22 19:52:04.522885 | orchestrator | 2025-06-22 19:52:04 | INFO  | Task 
ddf1584c-37b6-4afc-9723-ef86286def88 is in state SUCCESS 2025-06-22 19:52:04.522895 | orchestrator | 2025-06-22 19:52:04 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:04.522905 | orchestrator | 2025-06-22 19:52:04 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:04.522915 | orchestrator | 2025-06-22 19:52:04 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:04.523224 | orchestrator | 2025-06-22 19:52:04 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:04.523242 | orchestrator | 2025-06-22 19:52:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:07.554895 | orchestrator | 2025-06-22 19:52:07 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:07.555924 | orchestrator | 2025-06-22 19:52:07 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:07.557485 | orchestrator | 2025-06-22 19:52:07 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:07.559198 | orchestrator | 2025-06-22 19:52:07 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:07.559227 | orchestrator | 2025-06-22 19:52:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:10.604133 | orchestrator | 2025-06-22 19:52:10 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:10.606092 | orchestrator | 2025-06-22 19:52:10 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:10.607058 | orchestrator | 2025-06-22 19:52:10 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:10.609955 | orchestrator | 2025-06-22 19:52:10 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:10.610178 | orchestrator | 2025-06-22 19:52:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:13.645874 | orchestrator | 2025-06-22 19:52:13 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:13.648414 | orchestrator | 2025-06-22 19:52:13 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:13.651398 | orchestrator | 2025-06-22 19:52:13 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:13.653403 | orchestrator | 2025-06-22 19:52:13 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:13.653432 | orchestrator | 2025-06-22 19:52:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:16.679208 | orchestrator | 2025-06-22 19:52:16 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:16.680869 | orchestrator | 2025-06-22 19:52:16 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:16.680901 | orchestrator | 2025-06-22 19:52:16 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:16.682099 | orchestrator | 2025-06-22 19:52:16 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:16.682127 | orchestrator | 2025-06-22 19:52:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:19.711687 | orchestrator | 2025-06-22 19:52:19 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:19.712004 | orchestrator | 2025-06-22 19:52:19 | INFO  | Task 
7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:19.713348 | orchestrator | 2025-06-22 19:52:19 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:19.714739 | orchestrator | 2025-06-22 19:52:19 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:19.714781 | orchestrator | 2025-06-22 19:52:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:22.764434 | orchestrator | 2025-06-22 19:52:22 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:22.765884 | orchestrator | 2025-06-22 19:52:22 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:22.767245 | orchestrator | 2025-06-22 19:52:22 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:22.771062 | orchestrator | 2025-06-22 19:52:22 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:22.771461 | orchestrator | 2025-06-22 19:52:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:25.821600 | orchestrator | 2025-06-22 19:52:25 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:25.823529 | orchestrator | 2025-06-22 19:52:25 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:25.825237 | orchestrator | 2025-06-22 19:52:25 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:25.826970 | orchestrator | 2025-06-22 19:52:25 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:25.826996 | orchestrator | 2025-06-22 19:52:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:28.871935 | orchestrator | 2025-06-22 19:52:28 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:28.872650 | orchestrator | 2025-06-22 19:52:28 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:28.873659 | orchestrator | 2025-06-22 19:52:28 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:28.875011 | orchestrator | 2025-06-22 19:52:28 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:28.875044 | orchestrator | 2025-06-22 19:52:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:31.917013 | orchestrator | 2025-06-22 19:52:31 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:31.917570 | orchestrator | 2025-06-22 19:52:31 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:31.919133 | orchestrator | 2025-06-22 19:52:31 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:31.920064 | orchestrator | 2025-06-22 19:52:31 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:52:31.920446 | orchestrator | 2025-06-22 19:52:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:34.960693 | orchestrator | 2025-06-22 19:52:34 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:52:34.962642 | orchestrator | 2025-06-22 19:52:34 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED 2025-06-22 19:52:34.963472 | orchestrator | 2025-06-22 19:52:34 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED 2025-06-22 19:52:34.965111 | orchestrator | 2025-06-22 19:52:34 | INFO  | Task 
5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:34.965126 | orchestrator | 2025-06-22 19:52:34 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:38.005853 | orchestrator | 2025-06-22 19:52:38 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:52:38.006452 | orchestrator | 2025-06-22 19:52:38 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:52:38.007113 | orchestrator | 2025-06-22 19:52:38 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED
2025-06-22 19:52:38.008021 | orchestrator | 2025-06-22 19:52:38 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:38.010157 | orchestrator | 2025-06-22 19:52:38 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:41.063713 | orchestrator | 2025-06-22 19:52:41 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:52:41.069134 | orchestrator | 2025-06-22 19:52:41 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:52:41.072323 | orchestrator | 2025-06-22 19:52:41 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED
2025-06-22 19:52:41.073603 | orchestrator | 2025-06-22 19:52:41 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:41.073630 | orchestrator | 2025-06-22 19:52:41 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:44.132584 | orchestrator | 2025-06-22 19:52:44 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:52:44.132788 | orchestrator | 2025-06-22 19:52:44 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:52:44.134757 | orchestrator | 2025-06-22 19:52:44 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state STARTED
2025-06-22 19:52:44.136147 | orchestrator | 2025-06-22 19:52:44 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:44.136406 | orchestrator | 2025-06-22 19:52:44 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:47.188744 | orchestrator | 2025-06-22 19:52:47 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:52:47.191559 | orchestrator | 2025-06-22 19:52:47 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:52:47.192917 | orchestrator | 2025-06-22 19:52:47 | INFO  | Task 78944cb7-3d86-4e26-ad10-e4f622f02a3c is in state SUCCESS
2025-06-22 19:52:47.194211 | orchestrator | 2025-06-22 19:52:47 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:47.194255 | orchestrator | 2025-06-22 19:52:47 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:50.234657 | orchestrator | 2025-06-22 19:52:50 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:52:50.235537 | orchestrator | 2025-06-22 19:52:50 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:52:50.238493 | orchestrator | 2025-06-22 19:52:50 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:50.238561 | orchestrator | 2025-06-22 19:52:50 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:53.282435 | orchestrator | 2025-06-22 19:52:53 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:52:53.286589 | orchestrator | 2025-06-22 19:52:53 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:52:53.288745 | orchestrator | 2025-06-22 19:52:53 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:53.288779 | orchestrator | 2025-06-22 19:52:53 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:56.327013 | orchestrator | 2025-06-22 19:52:56 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:52:56.327927 | orchestrator | 2025-06-22 19:52:56 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:52:56.330883 | orchestrator | 2025-06-22 19:52:56 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:56.330940 | orchestrator | 2025-06-22 19:52:56 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:59.370354 | orchestrator | 2025-06-22 19:52:59 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:52:59.371896 | orchestrator | 2025-06-22 19:52:59 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:52:59.373980 | orchestrator | 2025-06-22 19:52:59 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:52:59.374115 | orchestrator | 2025-06-22 19:52:59 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:02.425625 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:02.428872 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:02.431544 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:02.431964 | orchestrator | 2025-06-22 19:53:02 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:05.467975 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:05.468407 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:05.469739 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:05.469763 | orchestrator | 2025-06-22 19:53:05 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:08.508376 | orchestrator | 2025-06-22 19:53:08 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:08.510468 | orchestrator | 2025-06-22 19:53:08 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:08.511168 | orchestrator | 2025-06-22 19:53:08 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:08.511346 | orchestrator | 2025-06-22 19:53:08 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:11.547774 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:11.549731 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:11.551070 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:11.551227 | orchestrator | 2025-06-22 19:53:11 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:14.595981 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:14.599580 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:14.601066 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:14.601095 | orchestrator | 2025-06-22 19:53:14 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:17.657338 | orchestrator | 2025-06-22 19:53:17 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:17.657585 | orchestrator | 2025-06-22 19:53:17 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:17.659140 | orchestrator | 2025-06-22 19:53:17 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:17.659174 | orchestrator | 2025-06-22 19:53:17 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:20.716857 | orchestrator | 2025-06-22 19:53:20 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:20.718165 | orchestrator | 2025-06-22 19:53:20 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:20.722378 | orchestrator | 2025-06-22 19:53:20 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:20.722420 | orchestrator | 2025-06-22 19:53:20 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:23.763297 | orchestrator | 2025-06-22 19:53:23 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:23.764796 | orchestrator | 2025-06-22 19:53:23 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:23.766731 | orchestrator | 2025-06-22 19:53:23 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:23.766771 | orchestrator | 2025-06-22 19:53:23 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:26.813176 | orchestrator | 2025-06-22 19:53:26 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:26.813281 | orchestrator | 2025-06-22 19:53:26 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:26.814577 | orchestrator | 2025-06-22 19:53:26 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:26.814606 | orchestrator | 2025-06-22 19:53:26 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:29.859149 | orchestrator | 2025-06-22 19:53:29 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:29.859990 | orchestrator | 2025-06-22 19:53:29 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state STARTED
2025-06-22 19:53:29.861446 | orchestrator | 2025-06-22 19:53:29 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED
2025-06-22 19:53:29.861472 | orchestrator | 2025-06-22 19:53:29 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:53:32.904426 | orchestrator | 2025-06-22 19:53:32 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED
2025-06-22 19:53:32.913979 | orchestrator |
2025-06-22 19:53:32.914100 | orchestrator |
2025-06-22 19:53:32.914115 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-06-22 19:53:32.914126 | orchestrator |
2025-06-22 19:53:32.914136 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-06-22 19:53:32.914146 | orchestrator | Sunday 22 June 2025 19:51:24 +0000 (0:00:00.243) 0:00:00.243 ***********
2025-06-22 19:53:32.914157 | orchestrator | ok: [testbed-manager]
2025-06-22 19:53:32.914168 | orchestrator |
2025-06-22 19:53:32.914178 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-06-22 19:53:32.914188 | orchestrator | Sunday 22 June 2025 19:51:25 +0000 (0:00:00.966) 0:00:01.210 ***********
2025-06-22 19:53:32.914198 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-06-22 19:53:32.914207 | orchestrator |
2025-06-22 19:53:32.914218 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-06-22 19:53:32.914227 | orchestrator | Sunday 22 June 2025 19:51:25 +0000 (0:00:00.623) 0:00:01.834 ***********
2025-06-22 19:53:32.914237 | orchestrator | changed: [testbed-manager]
2025-06-22 19:53:32.914254 | orchestrator |
2025-06-22 19:53:32.914271 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-06-22 19:53:32.914298 | orchestrator | Sunday 22 June 2025 19:51:27 +0000 (0:00:01.907) 0:00:03.742 ***********
2025-06-22 19:53:32.914335 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-06-22 19:53:32.914352 | orchestrator | ok: [testbed-manager]
2025-06-22 19:53:32.914410 | orchestrator |
2025-06-22 19:53:32.914427 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-22 19:53:32.914438 | orchestrator | Sunday 22 June 2025 19:52:41 +0000 (0:01:14.019) 0:01:17.762 ***********
2025-06-22 19:53:32.914448 | orchestrator | changed: [testbed-manager]
2025-06-22 19:53:32.914458 | orchestrator |
2025-06-22 19:53:32.914468 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 19:53:32.914478 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 19:53:32.914489 | orchestrator |
2025-06-22 19:53:32.914499 | orchestrator |
2025-06-22 19:53:32.914508 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:53:32.914518 | orchestrator | Sunday 22 June 2025 19:52:45 +0000 (0:00:03.632) 0:01:21.394 ***********
2025-06-22 19:53:32.914528 | orchestrator | ===============================================================================
2025-06-22 19:53:32.914537 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 74.02s
2025-06-22 19:53:32.914547 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.63s
2025-06-22 19:53:32.914557 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.91s
2025-06-22 19:53:32.914567 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.97s
2025-06-22 19:53:32.914576 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.62s
2025-06-22 19:53:32.914606 | orchestrator |
2025-06-22 19:53:32.914617 | orchestrator |
2025-06-22 19:53:32.914627 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-22 19:53:32.914638 | orchestrator |
2025-06-22 19:53:32.914648 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-22 19:53:32.914659 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 
(0:00:00.257) 0:00:00.257 *********** 2025-06-22 19:53:32.914671 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:53:32.914683 | orchestrator | 2025-06-22 19:53:32.914694 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-06-22 19:53:32.914705 | orchestrator | Sunday 22 June 2025 19:50:58 +0000 (0:00:01.240) 0:00:01.497 *********** 2025-06-22 19:53:32.914715 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:53:32.914726 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:53:32.914737 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:53:32.914748 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:53:32.914758 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:53:32.914769 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:53:32.914779 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:53:32.914790 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:53:32.914800 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:53:32.914811 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:53:32.914821 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:53:32.914832 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:53:32.914843 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:53:32.914854 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:53:32.914864 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:53:32.914875 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:53:32.914902 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:53:32.914913 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:53:32.914924 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:53:32.914935 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:53:32.914946 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:53:32.914957 | orchestrator | 2025-06-22 19:53:32.914968 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-22 19:53:32.914978 | orchestrator | Sunday 22 June 2025 19:51:02 +0000 (0:00:04.264) 0:00:05.762 *********** 2025-06-22 19:53:32.914988 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:53:32.914999 | orchestrator | 2025-06-22 19:53:32.915015 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-22 19:53:32.915026 | orchestrator | Sunday 22 June 2025 19:51:04 +0000 (0:00:01.530) 0:00:07.292 *********** 2025-06-22 19:53:32.915045 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.915060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.915071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.915081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.915091 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.915108 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915119 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.915133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915148 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.915159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915179 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915190 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915235 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.915344 | orchestrator | 2025-06-22 19:53:32.915354 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-22 19:53:32.915364 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:04.783) 0:00:12.075 *********** 2025-06-22 19:53:32.915380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915402 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915412 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915422 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:53:32.915433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915463 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:32.915474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915516 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:53:32.915529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915560 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:53:32.915570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915590 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915647 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:53:32.915657 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:53:32.915666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915697 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:53:32.915706 | orchestrator | 2025-06-22 19:53:32.915716 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-22 19:53:32.915726 | orchestrator | Sunday 22 June 2025 19:51:10 +0000 (0:00:01.282) 0:00:13.358 *********** 2025-06-22 19:53:32.915736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915746 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915769 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915780 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:53:32.915794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915824 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:32.915834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915905 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:53:32.915919 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915930 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915940 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.915950 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:53:32.915960 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:53:32.915970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.915986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.916002 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.916012 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:53:32.916022 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:53:32.916035 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.916045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.916055 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:53:32.916065 | orchestrator | 2025-06-22 19:53:32.916075 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-22 19:53:32.916085 | orchestrator | Sunday 22 June 2025 19:51:12 +0000 (0:00:02.655) 0:00:16.013 *********** 2025-06-22 19:53:32.916094 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:53:32.916104 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:32.916113 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:53:32.916123 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:53:32.916133 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:53:32.916142 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:53:32.916151 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:53:32.916161 | orchestrator | 2025-06-22 19:53:32.916170 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-22 19:53:32.916180 | orchestrator | Sunday 22 June 2025 19:51:13 +0000 (0:00:00.772) 0:00:16.786 *********** 2025-06-22 19:53:32.916195 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:53:32.916204 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:53:32.916214 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:53:32.916223 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:53:32.916233 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:53:32.916242 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:53:32.916252 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:53:32.916261 | orchestrator | 2025-06-22 19:53:32.916271 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-22 19:53:32.916280 | orchestrator | Sunday 22 June 2025 19:51:14 +0000 (0:00:01.253) 0:00:18.040 *********** 2025-06-22 19:53:32.916290 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.916300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.916363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.916374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.916391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.916401 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916417 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 
19:53:32.916427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.916437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916478 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916489 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-22 19:53:32.916505 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.916601 | orchestrator | 2025-06-22 19:53:32.916616 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-22 19:53:32.916626 | orchestrator | Sunday 22 June 2025 19:51:19 +0000 (0:00:05.029) 0:00:23.069 *********** 2025-06-22 19:53:32.916635 | orchestrator | [WARNING]: Skipped 2025-06-22 19:53:32.916645 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-22 19:53:32.916655 | orchestrator | to this access issue: 2025-06-22 19:53:32.916664 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-22 19:53:32.916674 | orchestrator | directory 2025-06-22 19:53:32.916684 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:53:32.916693 | orchestrator | 2025-06-22 19:53:32.916702 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-22 19:53:32.916712 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:01.769) 0:00:24.838 *********** 2025-06-22 19:53:32.916722 | orchestrator | [WARNING]: Skipped 2025-06-22 19:53:32.916731 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-22 19:53:32.916740 | orchestrator | to this access issue: 2025-06-22 19:53:32.916750 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-22 19:53:32.916760 | orchestrator | directory 2025-06-22 19:53:32.916769 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:53:32.916778 | orchestrator | 2025-06-22 19:53:32.916788 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-22 19:53:32.916797 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:01.222) 0:00:26.061 *********** 2025-06-22 19:53:32.916807 | orchestrator | [WARNING]: Skipped 2025-06-22 19:53:32.916816 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-22 19:53:32.916825 | orchestrator | to this access issue: 2025-06-22 19:53:32.916835 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-22 19:53:32.916844 | orchestrator | directory 2025-06-22 19:53:32.916854 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:53:32.916863 | orchestrator | 2025-06-22 19:53:32.916873 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-22 19:53:32.916882 | orchestrator | Sunday 22 June 2025 19:51:24 +0000 (0:00:01.170) 0:00:27.231 *********** 2025-06-22 19:53:32.916895 | orchestrator | [WARNING]: Skipped 2025-06-22 19:53:32.916912 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-22 19:53:32.916927 | orchestrator | to this access issue: 2025-06-22 19:53:32.916943 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 
2025-06-22 19:53:32.916958 | orchestrator | directory 2025-06-22 19:53:32.916973 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:53:32.916988 | orchestrator | 2025-06-22 19:53:32.917005 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-22 19:53:32.917022 | orchestrator | Sunday 22 June 2025 19:51:24 +0000 (0:00:00.818) 0:00:28.050 *********** 2025-06-22 19:53:32.917039 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:32.917055 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:32.917072 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:53:32.917089 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:32.917098 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:32.917108 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:53:32.917117 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:53:32.917127 | orchestrator | 2025-06-22 19:53:32.917136 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-22 19:53:32.917146 | orchestrator | Sunday 22 June 2025 19:51:29 +0000 (0:00:04.589) 0:00:32.640 *********** 2025-06-22 19:53:32.917155 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:53:32.917178 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:53:32.917197 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:53:32.917213 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:53:32.917223 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:53:32.917233 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:53:32.917242 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:53:32.917252 | orchestrator | 2025-06-22 19:53:32.917261 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-22 19:53:32.917271 | orchestrator | Sunday 22 June 2025 19:51:33 +0000 (0:00:03.655) 0:00:36.295 *********** 2025-06-22 19:53:32.917280 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:32.917290 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:32.917299 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:32.917330 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:32.917340 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:53:32.917355 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:53:32.917364 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:53:32.917374 | orchestrator | 2025-06-22 19:53:32.917383 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-22 19:53:32.917393 | orchestrator | Sunday 22 June 2025 19:51:36 +0000 (0:00:03.229) 0:00:39.525 *********** 2025-06-22 19:53:32.917404 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.917415 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.917425 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.917435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.917446 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.917472 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.917483 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 
19:53:32.917497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.917507 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.917517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.917527 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.917538 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.917553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.917569 | orchestrator | ok: [testbed-node-2] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.917579 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.917593 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.917603 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.917613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.917624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': 2025-06-22 19:53:32 | INFO  | Task 7d9dc36c-574c-47a6-9227-ebb1ff109a1c is in state SUCCESS 2025-06-22 19:53:32.917781 | orchestrator | True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:53:32.917807 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.917818 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.917828 | orchestrator | 2025-06-22 19:53:32.917838 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-22 19:53:32.917848 | orchestrator | Sunday 22 June 2025 19:51:39 +0000 (0:00:03.115) 0:00:42.641 *********** 2025-06-22 19:53:32.917858 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:32.917867 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:32.917877 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:32.917887 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:32.917896 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:32.917906 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:32.917916 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:53:32.917925 | orchestrator | 2025-06-22 19:53:32.917935 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-22 19:53:32.917945 | orchestrator | Sunday 22 June 2025 19:51:41 +0000 (0:00:02.376) 0:00:45.017 *********** 2025-06-22 19:53:32.917955 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:53:32.917965 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:53:32.917975 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:53:32.917984 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:53:32.917994 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:53:32.918004 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:53:32.918013 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:53:32.918064 | orchestrator | 2025-06-22 19:53:32.918074 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-22 19:53:32.918089 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:01.909) 0:00:46.927 *********** 2025-06-22 19:53:32.918100 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.918110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.918140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.918151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.918161 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918172 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.918189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918200 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.918210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:53:32.918248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918265 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918282 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918324 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918388 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:53:32.918452 | orchestrator | 2025-06-22 19:53:32.918462 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-22 19:53:32.918472 | orchestrator | Sunday 22 June 2025 19:51:46 +0000 (0:00:02.692) 0:00:49.620 *********** 2025-06-22 19:53:32.918481 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:32.918491 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:32.918500 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:32.918510 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:32.918519 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:53:32.918529 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:53:32.918538 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:53:32.918547 | orchestrator | 2025-06-22 19:53:32.918557 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-22 19:53:32.918567 | orchestrator | Sunday 22 June 2025 19:51:48 +0000 (0:00:01.629) 0:00:51.250 *********** 2025-06-22 19:53:32.918576 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:32.918585 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:32.918595 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:32.918608 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:32.918618 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:53:32.918627 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:53:32.918637 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:53:32.918652 | orchestrator | 2025-06-22 19:53:32.918662 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:53:32.918671 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:01.316) 0:00:52.566 *********** 2025-06-22 19:53:32.918681 | orchestrator | 2025-06-22 19:53:32.918691 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:53:32.918700 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.213) 0:00:52.779 *********** 2025-06-22 19:53:32.918709 | orchestrator | 2025-06-22 19:53:32.918719 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:53:32.918728 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.052) 0:00:52.832 *********** 2025-06-22 19:53:32.918737 | orchestrator | 2025-06-22 19:53:32.918747 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:53:32.918756 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.053) 0:00:52.886 *********** 2025-06-22 19:53:32.918766 | orchestrator | 2025-06-22 19:53:32.918775 | orchestrator | TASK [common : Flush 
handlers] ************************************************* 2025-06-22 19:53:32.918785 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.050) 0:00:52.936 *********** 2025-06-22 19:53:32.918794 | orchestrator | 2025-06-22 19:53:32.918803 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:53:32.918813 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.049) 0:00:52.986 *********** 2025-06-22 19:53:32.918822 | orchestrator | 2025-06-22 19:53:32.918832 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:53:32.918841 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.049) 0:00:53.035 *********** 2025-06-22 19:53:32.918850 | orchestrator | 2025-06-22 19:53:32.918860 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-22 19:53:32.918869 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:00.067) 0:00:53.103 *********** 2025-06-22 19:53:32.918879 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:32.918888 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:32.918898 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:53:32.918907 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:53:32.918917 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:32.918926 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:53:32.918935 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:32.918945 | orchestrator | 2025-06-22 19:53:32.918954 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-22 19:53:32.918964 | orchestrator | Sunday 22 June 2025 19:52:36 +0000 (0:00:46.248) 0:01:39.352 *********** 2025-06-22 19:53:32.918974 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:32.918988 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:32.918998 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:53:32.919007 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:32.919017 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:53:32.919026 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:53:32.919035 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:32.919045 | orchestrator | 2025-06-22 19:53:32.919054 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-22 19:53:32.919064 | orchestrator | Sunday 22 June 2025 19:53:24 +0000 (0:00:48.463) 0:02:27.815 *********** 2025-06-22 19:53:32.919073 | orchestrator | ok: [testbed-manager] 2025-06-22 19:53:32.919083 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:53:32.919092 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:53:32.919102 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:53:32.919111 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:53:32.919121 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:53:32.919130 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:53:32.919139 | orchestrator | 2025-06-22 19:53:32.919149 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-22 19:53:32.919158 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:02.235) 0:02:30.051 *********** 2025-06-22 19:53:32.919173 | orchestrator | changed: [testbed-manager] 2025-06-22 19:53:32.919183 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:32.919192 | orchestrator | changed: [testbed-node-2] 2025-06-22 
19:53:32.919201 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:53:32.919211 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:53:32.919220 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:32.919229 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:53:32.919239 | orchestrator | 2025-06-22 19:53:32.919248 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:53:32.919258 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:53:32.919268 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:53:32.919278 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:53:32.919287 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:53:32.919297 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:53:32.919324 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:53:32.919339 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-22 19:53:32.919349 | orchestrator | 2025-06-22 19:53:32.919358 | orchestrator | 2025-06-22 19:53:32.919368 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:53:32.919377 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:05.100) 0:02:35.151 *********** 2025-06-22 19:53:32.919387 | orchestrator | =============================================================================== 2025-06-22 19:53:32.919396 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 48.46s 2025-06-22 19:53:32.919406 | orchestrator | common : Restart fluentd container ------------------------------------- 46.25s 2025-06-22 19:53:32.919415 | orchestrator | common : Restart cron container ----------------------------------------- 5.10s 2025-06-22 19:53:32.919424 | orchestrator | common : Copying over config.json files for services -------------------- 5.03s 2025-06-22 19:53:32.919434 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.78s 2025-06-22 19:53:32.919445 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.59s 2025-06-22 19:53:32.919462 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.26s 2025-06-22 19:53:32.919478 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.66s 2025-06-22 19:53:32.919493 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.23s 2025-06-22 19:53:32.919509 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.12s 2025-06-22 19:53:32.919523 | orchestrator | common : Check common containers ---------------------------------------- 2.69s 2025-06-22 19:53:32.919540 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.66s 2025-06-22 19:53:32.919558 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.38s 2025-06-22 19:53:32.919575 | orchestrator | common : Initializing toolbox container using normal user 
--------------- 2.24s 2025-06-22 19:53:32.919590 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.91s 2025-06-22 19:53:32.919600 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.77s 2025-06-22 19:53:32.919609 | orchestrator | common : Creating log volume -------------------------------------------- 1.63s 2025-06-22 19:53:32.919626 | orchestrator | common : include_tasks -------------------------------------------------- 1.53s 2025-06-22 19:53:32.919636 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.32s 2025-06-22 19:53:32.919645 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.28s 2025-06-22 19:53:32.919660 | orchestrator | 2025-06-22 19:53:32 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:53:32.919671 | orchestrator | 2025-06-22 19:53:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:35.968175 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:53:35.968639 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:53:35.971669 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:53:35.972368 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:53:35.978411 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:53:35.978461 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 18800d23-c5a2-4d44-9438-b7d876bbe72e is in state STARTED 2025-06-22 19:53:35.978479 | orchestrator | 2025-06-22 19:53:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:39.003708 | orchestrator | 2025-06-22 19:53:38 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:53:39.009471 | orchestrator | 2025-06-22 19:53:39 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:53:39.010086 | orchestrator | 2025-06-22 19:53:39 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:53:39.015341 | orchestrator | 2025-06-22 19:53:39 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:53:39.021449 | orchestrator | 2025-06-22 19:53:39 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:53:39.025469 | orchestrator | 2025-06-22 19:53:39 | INFO  | Task 18800d23-c5a2-4d44-9438-b7d876bbe72e is in state STARTED 2025-06-22 19:53:39.025503 | orchestrator | 2025-06-22 19:53:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:42.060198 | orchestrator | 2025-06-22 19:53:42 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:53:42.061239 | orchestrator | 2025-06-22 19:53:42 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:53:42.062120 | orchestrator | 2025-06-22 19:53:42 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:53:42.062810 | orchestrator | 2025-06-22 19:53:42 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:53:42.064258 | orchestrator | 2025-06-22 19:53:42 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 
2025-06-22 19:53:42.067962 | orchestrator | 2025-06-22 19:53:42 | INFO  | Task 18800d23-c5a2-4d44-9438-b7d876bbe72e is in state STARTED 2025-06-22 19:53:42.067990 | orchestrator | 2025-06-22 19:53:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:45.101884 | orchestrator | 2025-06-22 19:53:45 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:53:45.102342 | orchestrator | 2025-06-22 19:53:45 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:53:45.103090 | orchestrator | 2025-06-22 19:53:45 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:53:45.103849 | orchestrator | 2025-06-22 19:53:45 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:53:45.107105 | orchestrator | 2025-06-22 19:53:45 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:53:45.107176 | orchestrator | 2025-06-22 19:53:45 | INFO  | Task 18800d23-c5a2-4d44-9438-b7d876bbe72e is in state STARTED 2025-06-22 19:53:45.107191 | orchestrator | 2025-06-22 19:53:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:48.142264 | orchestrator | 2025-06-22 19:53:48 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:53:48.142630 | orchestrator | 2025-06-22 19:53:48 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:53:48.145266 | orchestrator | 2025-06-22 19:53:48 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:53:48.148286 | orchestrator | 2025-06-22 19:53:48 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:53:48.148320 | orchestrator | 2025-06-22 19:53:48 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:53:48.148331 | orchestrator | 2025-06-22 19:53:48 | INFO  | Task 18800d23-c5a2-4d44-9438-b7d876bbe72e is in state STARTED 2025-06-22 19:53:48.148340 | orchestrator | 2025-06-22 19:53:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:51.177598 | orchestrator | 2025-06-22 19:53:51 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:53:51.178339 | orchestrator | 2025-06-22 19:53:51 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:53:51.178705 | orchestrator | 2025-06-22 19:53:51 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:53:51.179577 | orchestrator | 2025-06-22 19:53:51 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:53:51.180023 | orchestrator | 2025-06-22 19:53:51 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:53:51.180931 | orchestrator | 2025-06-22 19:53:51 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:53:51.181425 | orchestrator | 2025-06-22 19:53:51 | INFO  | Task 18800d23-c5a2-4d44-9438-b7d876bbe72e is in state SUCCESS 2025-06-22 19:53:51.181453 | orchestrator | 2025-06-22 19:53:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:54.206833 | orchestrator | 2025-06-22 19:53:54 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:53:54.206921 | orchestrator | 2025-06-22 19:53:54 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:53:54.208541 | orchestrator | 2025-06-22 19:53:54 | INFO  | Task 
9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:53:54.208970 | orchestrator | 2025-06-22 19:53:54 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:53:54.209981 | orchestrator | 2025-06-22 19:53:54 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:53:54.210010 | orchestrator | 2025-06-22 19:53:54 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:53:54.210071 | orchestrator | 2025-06-22 19:53:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:57.240019 | orchestrator | 2025-06-22 19:53:57 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:53:57.242258 | orchestrator | 2025-06-22 19:53:57 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:53:57.243032 | orchestrator | 2025-06-22 19:53:57 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:53:57.244196 | orchestrator | 2025-06-22 19:53:57 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:53:57.244806 | orchestrator | 2025-06-22 19:53:57 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:53:57.246458 | orchestrator | 2025-06-22 19:53:57 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:53:57.246490 | orchestrator | 2025-06-22 19:53:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:00.285624 | orchestrator | 2025-06-22 19:54:00 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:00.287927 | orchestrator | 2025-06-22 19:54:00 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:00.289546 | orchestrator | 2025-06-22 19:54:00 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:54:00.291014 | orchestrator | 2025-06-22 19:54:00 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:00.291240 | orchestrator | 2025-06-22 19:54:00 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:00.291868 | orchestrator | 2025-06-22 19:54:00 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:00.291975 | orchestrator | 2025-06-22 19:54:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:03.329036 | orchestrator | 2025-06-22 19:54:03 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:03.329425 | orchestrator | 2025-06-22 19:54:03 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:03.330214 | orchestrator | 2025-06-22 19:54:03 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:54:03.330725 | orchestrator | 2025-06-22 19:54:03 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:03.332085 | orchestrator | 2025-06-22 19:54:03 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:03.332562 | orchestrator | 2025-06-22 19:54:03 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:03.332584 | orchestrator | 2025-06-22 19:54:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:06.362008 | orchestrator | 2025-06-22 19:54:06 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:06.363927 | orchestrator | 2025-06-22 
19:54:06 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:06.365708 | orchestrator | 2025-06-22 19:54:06 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state STARTED 2025-06-22 19:54:06.368380 | orchestrator | 2025-06-22 19:54:06 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:06.370839 | orchestrator | 2025-06-22 19:54:06 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:06.372102 | orchestrator | 2025-06-22 19:54:06 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:06.372147 | orchestrator | 2025-06-22 19:54:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:09.411456 | orchestrator | 2025-06-22 19:54:09 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:09.412502 | orchestrator | 2025-06-22 19:54:09 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:09.413732 | orchestrator | 2025-06-22 19:54:09 | INFO  | Task 9e95d598-53dd-45d9-997e-66f5227fdb39 is in state SUCCESS 2025-06-22 19:54:09.414911 | orchestrator | 2025-06-22 19:54:09.414943 | orchestrator | 2025-06-22 19:54:09.414955 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:54:09.414968 | orchestrator | 2025-06-22 19:54:09.414980 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:54:09.414991 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:00.206) 0:00:00.206 *********** 2025-06-22 19:54:09.415003 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:09.415015 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:09.415026 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:09.415037 | orchestrator | 2025-06-22 19:54:09.415048 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:54:09.415060 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:00.239) 0:00:00.446 *********** 2025-06-22 19:54:09.415072 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-06-22 19:54:09.415084 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-06-22 19:54:09.415095 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-06-22 19:54:09.415106 | orchestrator | 2025-06-22 19:54:09.415133 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-06-22 19:54:09.415145 | orchestrator | 2025-06-22 19:54:09.415156 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-06-22 19:54:09.415168 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.774) 0:00:01.220 *********** 2025-06-22 19:54:09.415179 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:54:09.415190 | orchestrator | 2025-06-22 19:54:09.415201 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-06-22 19:54:09.415213 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.834) 0:00:02.055 *********** 2025-06-22 19:54:09.415224 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-22 19:54:09.415236 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-22 19:54:09.415247 | orchestrator | changed: [testbed-node-2] 
=> (item=memcached) 2025-06-22 19:54:09.415258 | orchestrator | 2025-06-22 19:54:09.415269 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-06-22 19:54:09.415280 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:01.108) 0:00:03.163 *********** 2025-06-22 19:54:09.415291 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-22 19:54:09.415302 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-22 19:54:09.415344 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-22 19:54:09.415356 | orchestrator | 2025-06-22 19:54:09.415367 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-06-22 19:54:09.415377 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:02.750) 0:00:05.913 *********** 2025-06-22 19:54:09.415388 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:09.415399 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:09.415410 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:09.415420 | orchestrator | 2025-06-22 19:54:09.415431 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-06-22 19:54:09.415475 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:02.844) 0:00:08.758 *********** 2025-06-22 19:54:09.415487 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:09.415497 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:09.415508 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:09.415520 | orchestrator | 2025-06-22 19:54:09.415538 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:54:09.415556 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:54:09.415595 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:54:09.415613 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:54:09.415631 | orchestrator | 2025-06-22 19:54:09.415642 | orchestrator | 2025-06-22 19:54:09.415653 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:54:09.415663 | orchestrator | Sunday 22 June 2025 19:53:49 +0000 (0:00:02.891) 0:00:11.650 *********** 2025-06-22 19:54:09.415674 | orchestrator | =============================================================================== 2025-06-22 19:54:09.415685 | orchestrator | memcached : Restart memcached container --------------------------------- 2.89s 2025-06-22 19:54:09.415696 | orchestrator | memcached : Check memcached container ----------------------------------- 2.84s 2025-06-22 19:54:09.415706 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.75s 2025-06-22 19:54:09.415717 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.11s 2025-06-22 19:54:09.415728 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.83s 2025-06-22 19:54:09.415738 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2025-06-22 19:54:09.415749 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.24s 2025-06-22 19:54:09.415759 | orchestrator | 2025-06-22 19:54:09.415770 | orchestrator | 2025-06-22 
19:54:09.415781 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:54:09.415792 | orchestrator | 2025-06-22 19:54:09.415802 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:54:09.415813 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.631) 0:00:00.631 *********** 2025-06-22 19:54:09.415824 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:09.415835 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:09.415845 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:09.415856 | orchestrator | 2025-06-22 19:54:09.415867 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:54:09.415891 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.428) 0:00:01.059 *********** 2025-06-22 19:54:09.415903 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-22 19:54:09.415914 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-22 19:54:09.415925 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-22 19:54:09.415936 | orchestrator | 2025-06-22 19:54:09.415947 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-22 19:54:09.415958 | orchestrator | 2025-06-22 19:54:09.415969 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-22 19:54:09.415979 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:00.698) 0:00:01.758 *********** 2025-06-22 19:54:09.415990 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:54:09.416001 | orchestrator | 2025-06-22 19:54:09.416011 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-22 19:54:09.416022 | orchestrator | Sunday 22 June 2025 19:53:41 +0000 (0:00:01.083) 0:00:02.841 *********** 2025-06-22 19:54:09.416043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416124 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416136 | orchestrator | 2025-06-22 19:54:09.416147 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-22 19:54:09.416159 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:02.352) 0:00:05.193 *********** 2025-06-22 19:54:09.416170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416249 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416260 | orchestrator | 2025-06-22 19:54:09.416271 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-22 19:54:09.416283 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:03.633) 0:00:08.827 *********** 2025-06-22 19:54:09.416299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 
'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416430 | orchestrator | 2025-06-22 19:54:09.416447 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-22 19:54:09.416459 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:03.410) 0:00:12.237 *********** 2025-06-22 19:54:09.416470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:54:09.416551 | orchestrator | 2025-06-22 19:54:09.416561 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:54:09.416572 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:01.684) 0:00:13.921 *********** 2025-06-22 19:54:09.416583 | orchestrator | 2025-06-22 19:54:09.416594 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:54:09.416611 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:00.095) 0:00:14.017 *********** 2025-06-22 19:54:09.416622 | orchestrator | 2025-06-22 19:54:09.416633 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:54:09.416644 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:00.059) 0:00:14.077 *********** 2025-06-22 19:54:09.416661 | orchestrator | 2025-06-22 19:54:09.416672 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-06-22 19:54:09.416683 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:00.060) 0:00:14.138 *********** 2025-06-22 19:54:09.416693 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:09.416704 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:09.416715 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:09.416726 | orchestrator | 2025-06-22 19:54:09.416736 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-06-22 19:54:09.416747 | orchestrator | Sunday 22 June 2025 19:54:02 +0000 (0:00:09.263) 0:00:23.402 *********** 2025-06-22 19:54:09.416758 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:09.416768 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:09.416784 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:09.416795 | orchestrator | 2025-06-22 19:54:09.416806 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:54:09.416817 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:54:09.416828 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:54:09.416838 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:54:09.416849 | orchestrator | 2025-06-22 19:54:09.416860 | orchestrator | 2025-06-22 19:54:09.416870 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:54:09.416881 | 
orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:05.295) 0:00:28.698 *********** 2025-06-22 19:54:09.416892 | orchestrator | =============================================================================== 2025-06-22 19:54:09.416903 | orchestrator | redis : Restart redis container ----------------------------------------- 9.26s 2025-06-22 19:54:09.416913 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.30s 2025-06-22 19:54:09.416924 | orchestrator | redis : Copying over default config.json files -------------------------- 3.63s 2025-06-22 19:54:09.416934 | orchestrator | redis : Copying over redis config files --------------------------------- 3.41s 2025-06-22 19:54:09.416945 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.35s 2025-06-22 19:54:09.416955 | orchestrator | redis : Check redis containers ------------------------------------------ 1.69s 2025-06-22 19:54:09.416966 | orchestrator | redis : include_tasks --------------------------------------------------- 1.08s 2025-06-22 19:54:09.416977 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s 2025-06-22 19:54:09.416987 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2025-06-22 19:54:09.416998 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.22s 2025-06-22 19:54:09.417009 | orchestrator | 2025-06-22 19:54:09 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:09.417998 | orchestrator | 2025-06-22 19:54:09 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:09.418798 | orchestrator | 2025-06-22 19:54:09 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:09.418829 | orchestrator | 2025-06-22 19:54:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:12.452093 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:12.453001 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:12.453558 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:12.454514 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:12.455337 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:12.455365 | orchestrator | 2025-06-22 19:54:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:15.491798 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:15.491877 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:15.492239 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:15.492936 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:15.493714 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:15.493736 | orchestrator | 2025-06-22 19:54:15 | INFO  | Wait 1 second(s) until the next 
check 2025-06-22 19:54:18.518362 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:18.518615 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:18.520091 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:18.520712 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:18.521468 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:18.521510 | orchestrator | 2025-06-22 19:54:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:21.547295 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:21.547415 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:21.550295 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:21.550943 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:21.551748 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:21.551774 | orchestrator | 2025-06-22 19:54:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:24.586081 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:24.586435 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:24.588420 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:24.592566 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:24.593331 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:24.593358 | orchestrator | 2025-06-22 19:54:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:27.629251 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:27.631596 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:27.634340 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:27.637276 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:27.637301 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:27.637342 | orchestrator | 2025-06-22 19:54:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:30.668817 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:30.672481 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:30.672529 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task 
9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:30.674424 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:30.674457 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:30.674469 | orchestrator | 2025-06-22 19:54:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:33.705887 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:33.705973 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:33.706250 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:33.707078 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:33.707728 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:33.710292 | orchestrator | 2025-06-22 19:54:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:36.749605 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:36.752286 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:36.755106 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:36.757534 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:36.760709 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:36.761417 | orchestrator | 2025-06-22 19:54:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:39.808104 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:39.808204 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:39.811797 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:39.812976 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:39.815921 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:39.815972 | orchestrator | 2025-06-22 19:54:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:42.866821 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:42.872219 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state STARTED 2025-06-22 19:54:42.872838 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:42.875436 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:42.875913 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:42.876704 | orchestrator | 2025-06-22 
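The long item dumps in the redis play above are the role's service map: one entry for the redis server and one for redis-sentinel, each carrying the container name, image, bind mounts, and a healthcheck_listen probe on the service port. Reconstructed from the log output as plain YAML (the values are copied from the log; the variable name follows kolla-ansible's <role>_services convention and is an assumption):

```yaml
redis_services:                      # assumed variable name
  redis:
    container_name: redis
    group: redis
    enabled: true
    image: registry.osism.tech/kolla/redis:2024.2
    volumes:
      - /etc/kolla/redis/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - redis:/var/lib/redis/
      - kolla_logs:/var/log/kolla/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
      timeout: "30"
  redis-sentinel:
    container_name: redis_sentinel
    group: redis
    enabled: true
    image: registry.osism.tech/kolla/redis-sentinel:2024.2
    environment:
      REDIS_CONF: /etc/redis/redis.conf
      REDIS_GEN_CONF: /etc/redis/redis-regenerated-by-config-rewrite.conf
    volumes:
      - /etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - kolla_logs:/var/log/kolla/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "healthcheck_listen redis-sentinel 26379"]
      timeout: "30"
```

The role loops over this map in every task ("Ensuring config directories exist", "Copying over default config.json files", "Copying over redis config files", "Check redis containers"), which is why each task prints two items per node and why the 9.26s restart of the redis container dominates the TASKS RECAP.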
19:54:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:45.915154 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:45.915901 | orchestrator | 2025-06-22 19:54:45.915938 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task a4bf2454-3aec-4d27-9e88-24c73a1347ca is in state SUCCESS 2025-06-22 19:54:45.917201 | orchestrator | 2025-06-22 19:54:45.917236 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:54:45.917248 | orchestrator | 2025-06-22 19:54:45.917259 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:54:45.917270 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.765) 0:00:00.765 *********** 2025-06-22 19:54:45.917282 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:45.917293 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:45.917305 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:45.917356 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:45.917368 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:45.917379 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:45.917391 | orchestrator | 2025-06-22 19:54:45.917402 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:54:45.917413 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:01.098) 0:00:01.863 *********** 2025-06-22 19:54:45.917424 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:45.917436 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:45.917447 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:45.917458 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:45.917468 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:45.917479 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:45.917490 | orchestrator | 2025-06-22 19:54:45.917501 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-22 19:54:45.917512 | orchestrator | 2025-06-22 19:54:45.917523 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-22 19:54:45.917534 | orchestrator | Sunday 22 June 2025 19:53:41 +0000 (0:00:01.149) 0:00:03.013 *********** 2025-06-22 19:54:45.917546 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:54:45.917558 | orchestrator | 2025-06-22 19:54:45.917569 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 19:54:45.917579 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:02.073) 0:00:05.086 *********** 2025-06-22 19:54:45.917590 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-22 19:54:45.917601 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-22 19:54:45.917706 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-22 19:54:45.917721 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-22 
19:54:45.917757 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-22 19:54:45.917769 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-22 19:54:45.917779 | orchestrator | 2025-06-22 19:54:45.917790 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 19:54:45.917801 | orchestrator | Sunday 22 June 2025 19:53:45 +0000 (0:00:01.883) 0:00:06.970 *********** 2025-06-22 19:54:45.917855 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-22 19:54:45.917867 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-22 19:54:45.917890 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-22 19:54:45.917901 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-22 19:54:45.917912 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-22 19:54:45.917923 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-22 19:54:45.917933 | orchestrator | 2025-06-22 19:54:45.917944 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 19:54:45.917955 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:01.967) 0:00:08.937 *********** 2025-06-22 19:54:45.917966 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-22 19:54:45.917976 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:45.917987 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-22 19:54:45.917998 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:45.918009 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-22 19:54:45.918067 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:45.918079 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-22 19:54:45.918090 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-22 19:54:45.918101 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:45.918111 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:45.918122 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-22 19:54:45.918133 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:45.918143 | orchestrator | 2025-06-22 19:54:45.918154 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-22 19:54:45.918164 | orchestrator | Sunday 22 June 2025 19:53:49 +0000 (0:00:01.716) 0:00:10.654 *********** 2025-06-22 19:54:45.918175 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:45.918186 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:45.918197 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:45.918208 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:45.918218 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:45.918228 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:45.918239 | orchestrator | 2025-06-22 19:54:45.918250 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-22 19:54:45.918260 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:00.853) 0:00:11.508 *********** 2025-06-22 19:54:45.918288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918386 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918399 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918440 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918452 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-06-22 19:54:45.918483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918503 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918516 | orchestrator | 2025-06-22 19:54:45.918528 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-22 19:54:45.918541 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:01.836) 0:00:13.345 *********** 2025-06-22 19:54:45.918553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918615 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918644 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918700 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918736 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.918754 | orchestrator | 2025-06-22 
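The openvswitch play follows the same pattern with two services per host: the openvswitch_db container (ovsdb-server) and the privileged openvswitch_vswitchd container, both sharing /run/openvswitch with the host and probed via ovsdb-client and ovs-appctl respectively. The same definitions, reconstructed from the item dumps above as plain YAML (values from the log; the openvswitch_services variable name is an assumption following the <role>_services convention):

```yaml
openvswitch_services:                # assumed variable name
  openvswitch-db-server:
    container_name: openvswitch_db
    image: registry.osism.tech/kolla/openvswitch-db-server:2024.2
    enabled: true
    group: openvswitch
    host_in_groups: true
    volumes:
      - /etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /lib/modules:/lib/modules:ro
      - /run/openvswitch:/run/openvswitch:shared
      - kolla_logs:/var/log/kolla/
      - openvswitch_db:/var/lib/openvswitch/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "ovsdb-client list-dbs"]
      timeout: "30"
  openvswitch-vswitchd:
    container_name: openvswitch_vswitchd
    image: registry.osism.tech/kolla/openvswitch-vswitchd:2024.2
    enabled: true
    group: openvswitch
    host_in_groups: true
    privileged: true                 # vswitchd needs direct access to the host datapath
    volumes:
      - /etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - /lib/modules:/lib/modules:ro
      - /run/openvswitch:/run/openvswitch:shared
      - kolla_logs:/var/log/kolla/
    dimensions: {}
    healthcheck:
      interval: "30"
      retries: "3"
      start_period: "5"
      test: ["CMD-SHELL", "ovs-appctl version"]
      timeout: "30"
```

Unlike memcached and redis, this play runs on all six testbed nodes, so every task above reports results for testbed-node-0 through testbed-node-5.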
19:54:45.918765 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-22 19:54:45.918776 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:03.318) 0:00:16.663 *********** 2025-06-22 19:54:45.918890 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:45.918903 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:45.918914 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:45.918924 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:45.918935 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:45.918946 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:45.918956 | orchestrator | 2025-06-22 19:54:45.918967 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-22 19:54:45.918978 | orchestrator | Sunday 22 June 2025 19:53:56 +0000 (0:00:01.364) 0:00:18.027 *********** 2025-06-22 19:54:45.919036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919095 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919164 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919229 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:45.919240 | orchestrator | 2025-06-22 19:54:45.919251 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:45.919262 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:02.760) 0:00:20.788 *********** 2025-06-22 19:54:45.919273 | orchestrator | 2025-06-22 19:54:45.919284 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:45.919294 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.268) 0:00:21.056 *********** 2025-06-22 19:54:45.919305 | orchestrator | 2025-06-22 19:54:45.919346 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:45.919365 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.123) 0:00:21.180 *********** 2025-06-22 19:54:45.919381 
| orchestrator | 2025-06-22 19:54:45.919392 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:45.919402 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.126) 0:00:21.306 *********** 2025-06-22 19:54:45.919413 | orchestrator | 2025-06-22 19:54:45.919423 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:45.919434 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.139) 0:00:21.446 *********** 2025-06-22 19:54:45.919445 | orchestrator | 2025-06-22 19:54:45.919455 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:45.919466 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:00.123) 0:00:21.569 *********** 2025-06-22 19:54:45.919477 | orchestrator | 2025-06-22 19:54:45.919487 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-22 19:54:45.919503 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:00.405) 0:00:21.975 *********** 2025-06-22 19:54:45.919517 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:45.919529 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:45.919541 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:45.919553 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:45.919565 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:45.919577 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:45.919589 | orchestrator | 2025-06-22 19:54:45.919601 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-22 19:54:45.919619 | orchestrator | Sunday 22 June 2025 19:54:15 +0000 (0:00:15.157) 0:00:37.133 *********** 2025-06-22 19:54:45.919631 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:45.919643 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:45.919655 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:45.919667 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:45.919679 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:45.919691 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:45.919702 | orchestrator | 2025-06-22 19:54:45.919715 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-22 19:54:45.919727 | orchestrator | Sunday 22 June 2025 19:54:17 +0000 (0:00:01.477) 0:00:38.611 *********** 2025-06-22 19:54:45.919741 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:45.919761 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:45.919780 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:45.919795 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:45.919806 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:45.919819 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:45.919831 | orchestrator | 2025-06-22 19:54:45.919843 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-22 19:54:45.919855 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:04.883) 0:00:43.494 *********** 2025-06-22 19:54:45.919867 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-22 19:54:45.919880 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-22 19:54:45.919892 | orchestrator | 
changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2025-06-22 19:54:45.919908 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2025-06-22 19:54:45.919926 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2025-06-22 19:54:45.919953 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2025-06-22 19:54:45.919972 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2025-06-22 19:54:45.919989 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2025-06-22 19:54:45.920005 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2025-06-22 19:54:45.920021 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2025-06-22 19:54:45.920039 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2025-06-22 19:54:45.920054 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2025-06-22 19:54:45.920071 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-22 19:54:45.920088 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-22 19:54:45.920125 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-22 19:54:45.920145 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-22 19:54:45.920162 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-22 19:54:45.920181 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2025-06-22 19:54:45.920280 | orchestrator |
2025-06-22 19:54:45.920304 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2025-06-22 19:54:45.920378 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:06.698) 0:00:50.193 ***********
2025-06-22 19:54:45.920576 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2025-06-22 19:54:45.920645 | orchestrator | skipping: [testbed-node-3]
2025-06-22 19:54:45.920658 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2025-06-22 19:54:45.920669 | orchestrator | skipping: [testbed-node-4]
2025-06-22 19:54:45.920680 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2025-06-22 19:54:45.920690 | orchestrator | skipping: [testbed-node-5]
2025-06-22 19:54:45.920701 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2025-06-22 19:54:45.920712 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2025-06-22 19:54:45.920723 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2025-06-22 19:54:45.920734 | orchestrator |
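The "Set system-id, hostname and hw-offload" items and the br-ex bridge task above map onto plain ovs-vsctl calls. A minimal manual equivalent for one node, assuming the commands are run inside the Docker-based openvswitch_vswitchd container named in this log (the playbook applies these through its own modules, so the exact invocation differs):

    # per-node identifiers in the Open_vSwitch table
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . external_ids:system-id=testbed-node-0
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . external_ids:hostname=testbed-node-0
    # hw-offload is kept absent (state 'absent' in the items above)
    docker exec openvswitch_vswitchd ovs-vsctl remove Open_vSwitch . other_config hw-offload
    # external bridge; the following task attaches vxlan0 to it as a port
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-br br-ex
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-port br-ex vxlan0

As the skips above show, only testbed-node-0/1/2 get the bridge and port in this run; testbed-node-3/4/5 skip both items.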
2025-06-22 19:54:45.920744 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2025-06-22 19:54:45.920755 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:02.146) 0:00:52.340 ***********
2025-06-22 19:54:45.920775 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2025-06-22 19:54:45.920786 | orchestrator | skipping: [testbed-node-3]
2025-06-22 19:54:45.920797 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2025-06-22 19:54:45.920808 | orchestrator | skipping: [testbed-node-4]
2025-06-22 19:54:45.920818 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2025-06-22 19:54:45.920829 | orchestrator | skipping: [testbed-node-5]
2025-06-22 19:54:45.920840 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2025-06-22 19:54:45.920850 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2025-06-22 19:54:45.920861 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2025-06-22 19:54:45.920872 | orchestrator |
2025-06-22 19:54:45.920882 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2025-06-22 19:54:45.920893 | orchestrator | Sunday 22 June 2025 19:54:34 +0000 (0:00:03.607) 0:00:55.949 ***********
2025-06-22 19:54:45.920904 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:54:45.920915 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:54:45.920925 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:54:45.920936 | orchestrator | changed: [testbed-node-3]
2025-06-22 19:54:45.920946 | orchestrator | changed: [testbed-node-4]
2025-06-22 19:54:45.920957 | orchestrator | changed: [testbed-node-5]
2025-06-22 19:54:45.920968 | orchestrator |
2025-06-22 19:54:45.921178 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 19:54:45.921192 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 19:54:45.921204 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 19:54:45.921215 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-22 19:54:45.921227 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 19:54:45.921238 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 19:54:45.921263 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-22 19:54:45.921274 | orchestrator |
2025-06-22 19:54:45.921285 | orchestrator |
2025-06-22 19:54:45.921296 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:54:45.921342 | orchestrator | Sunday 22 June 2025 19:54:42 +0000 (0:00:07.949) 0:01:03.898 ***********
2025-06-22 19:54:45.921354 | orchestrator | ===============================================================================
2025-06-22 19:54:45.921365 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 15.16s
2025-06-22 19:54:45.921376 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 12.83s
2025-06-22 19:54:45.921387 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.70s
2025-06-22 19:54:45.921397 | orchestrator | openvswitch : Ensuring OVS ports 
are properly setup --------------------- 3.61s 2025-06-22 19:54:45.921408 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.32s 2025-06-22 19:54:45.921418 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.76s 2025-06-22 19:54:45.921429 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.15s 2025-06-22 19:54:45.921439 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.07s 2025-06-22 19:54:45.921450 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.97s 2025-06-22 19:54:45.921460 | orchestrator | module-load : Load modules ---------------------------------------------- 1.88s 2025-06-22 19:54:45.921471 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.84s 2025-06-22 19:54:45.921482 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.72s 2025-06-22 19:54:45.921492 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.48s 2025-06-22 19:54:45.921503 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.36s 2025-06-22 19:54:45.921513 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.19s 2025-06-22 19:54:45.921524 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.15s 2025-06-22 19:54:45.921534 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.10s 2025-06-22 19:54:45.921545 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.85s 2025-06-22 19:54:45.921556 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:45.921567 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:45.921577 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:54:45.921588 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:45.921606 | orchestrator | 2025-06-22 19:54:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:48.981272 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:48.981574 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:48.982447 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:48.988988 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:54:48.992084 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:48.992113 | orchestrator | 2025-06-22 19:54:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:52.033464 | orchestrator | 2025-06-22 19:54:52 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:52.033565 | orchestrator | 2025-06-22 19:54:52 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:52.034642 | orchestrator | 
2025-06-22 19:54:52 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:52.035290 | orchestrator | 2025-06-22 19:54:52 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:54:52.039247 | orchestrator | 2025-06-22 19:54:52 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:52.039344 | orchestrator | 2025-06-22 19:54:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:55.075727 | orchestrator | 2025-06-22 19:54:55 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:55.077130 | orchestrator | 2025-06-22 19:54:55 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:55.081598 | orchestrator | 2025-06-22 19:54:55 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:55.081681 | orchestrator | 2025-06-22 19:54:55 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:54:55.081697 | orchestrator | 2025-06-22 19:54:55 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:55.081721 | orchestrator | 2025-06-22 19:54:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:58.112731 | orchestrator | 2025-06-22 19:54:58 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:54:58.116985 | orchestrator | 2025-06-22 19:54:58 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:54:58.120706 | orchestrator | 2025-06-22 19:54:58 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:54:58.121114 | orchestrator | 2025-06-22 19:54:58 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:54:58.124711 | orchestrator | 2025-06-22 19:54:58 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:54:58.124748 | orchestrator | 2025-06-22 19:54:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:01.150216 | orchestrator | 2025-06-22 19:55:01 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:01.151590 | orchestrator | 2025-06-22 19:55:01 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:01.152050 | orchestrator | 2025-06-22 19:55:01 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:55:01.152627 | orchestrator | 2025-06-22 19:55:01 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:01.153242 | orchestrator | 2025-06-22 19:55:01 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:01.153270 | orchestrator | 2025-06-22 19:55:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:04.176822 | orchestrator | 2025-06-22 19:55:04 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:04.176942 | orchestrator | 2025-06-22 19:55:04 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:04.177392 | orchestrator | 2025-06-22 19:55:04 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:55:04.178706 | orchestrator | 2025-06-22 19:55:04 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:04.180121 | orchestrator | 2025-06-22 19:55:04 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 
19:55:04.180165 | orchestrator | 2025-06-22 19:55:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:07.213087 | orchestrator | 2025-06-22 19:55:07 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:07.214532 | orchestrator | 2025-06-22 19:55:07 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:07.216185 | orchestrator | 2025-06-22 19:55:07 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state STARTED 2025-06-22 19:55:07.217936 | orchestrator | 2025-06-22 19:55:07 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:07.219139 | orchestrator | 2025-06-22 19:55:07 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:07.219171 | orchestrator | 2025-06-22 19:55:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:10.260251 | orchestrator | 2025-06-22 19:55:10 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:10.267604 | orchestrator | 2025-06-22 19:55:10 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:10.270678 | orchestrator | 2025-06-22 19:55:10.270755 | orchestrator | 2025-06-22 19:55:10.270772 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-06-22 19:55:10.270785 | orchestrator | 2025-06-22 19:55:10.270796 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-22 19:55:10.270808 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:00.241) 0:00:00.241 *********** 2025-06-22 19:55:10.270964 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:55:10.270981 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:55:10.270992 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:55:10.271003 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.271014 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.271025 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.271036 | orchestrator | 2025-06-22 19:55:10.271047 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-22 19:55:10.271058 | orchestrator | Sunday 22 June 2025 19:50:58 +0000 (0:00:00.674) 0:00:00.915 *********** 2025-06-22 19:55:10.271069 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.271080 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.271091 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.271102 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.271113 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.271124 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.271135 | orchestrator | 2025-06-22 19:55:10.271145 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-22 19:55:10.271157 | orchestrator | Sunday 22 June 2025 19:50:58 +0000 (0:00:00.642) 0:00:01.558 *********** 2025-06-22 19:55:10.271168 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.271178 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.271189 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.271200 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.271211 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.271222 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.271233 | orchestrator | 2025-06-22 19:55:10.271244 | 
orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-06-22 19:55:10.271255 | orchestrator | Sunday 22 June 2025 19:50:59 +0000 (0:00:00.743) 0:00:02.302 ***********
2025-06-22 19:55:10.271266 | orchestrator | changed: [testbed-node-3]
2025-06-22 19:55:10.271277 | orchestrator | changed: [testbed-node-4]
2025-06-22 19:55:10.271288 | orchestrator | changed: [testbed-node-5]
2025-06-22 19:55:10.271299 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:55:10.271310 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:55:10.271377 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:55:10.271388 | orchestrator |
2025-06-22 19:55:10.271398 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-06-22 19:55:10.271409 | orchestrator | Sunday 22 June 2025 19:51:01 +0000 (0:00:01.972) 0:00:04.274 ***********
2025-06-22 19:55:10.271445 | orchestrator | changed: [testbed-node-3]
2025-06-22 19:55:10.271456 | orchestrator | changed: [testbed-node-4]
2025-06-22 19:55:10.271467 | orchestrator | changed: [testbed-node-5]
2025-06-22 19:55:10.271478 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:55:10.271489 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:55:10.271500 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:55:10.271511 | orchestrator |
2025-06-22 19:55:10.271522 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-06-22 19:55:10.271533 | orchestrator | Sunday 22 June 2025 19:51:02 +0000 (0:00:01.482) 0:00:05.756 ***********
2025-06-22 19:55:10.271544 | orchestrator | changed: [testbed-node-3]
2025-06-22 19:55:10.271555 | orchestrator | changed: [testbed-node-4]
2025-06-22 19:55:10.271567 | orchestrator | changed: [testbed-node-5]
2025-06-22 19:55:10.271580 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:55:10.271592 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:55:10.271604 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:55:10.271616 | orchestrator |
2025-06-22 19:55:10.271629 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-06-22 19:55:10.271641 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:02.265) 0:00:08.022 ***********
2025-06-22 19:55:10.271653 | orchestrator | skipping: [testbed-node-3]
2025-06-22 19:55:10.271665 | orchestrator | skipping: [testbed-node-4]
2025-06-22 19:55:10.271677 | orchestrator | skipping: [testbed-node-5]
2025-06-22 19:55:10.271689 | orchestrator | skipping: [testbed-node-0]
2025-06-22 19:55:10.271701 | orchestrator | skipping: [testbed-node-1]
2025-06-22 19:55:10.271713 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:55:10.271725 | orchestrator |
2025-06-22 19:55:10.271737 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-06-22 19:55:10.271750 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:00.757) 0:00:08.779 ***********
2025-06-22 19:55:10.271762 | orchestrator | skipping: [testbed-node-3]
2025-06-22 19:55:10.271787 | orchestrator | skipping: [testbed-node-4]
2025-06-22 19:55:10.271800 | orchestrator | skipping: [testbed-node-5]
2025-06-22 19:55:10.271812 | orchestrator | skipping: [testbed-node-0]
2025-06-22 19:55:10.271824 | orchestrator | skipping: [testbed-node-1]
2025-06-22 19:55:10.271836 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:55:10.271848 | orchestrator |
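The three forwarding tasks above are sysctl changes, and the skipped br_netfilter tasks are the matching module-load and bridge settings. A rough manual equivalent on a single node, assuming the usual sysctl keys for this kind of k3s prerequisite role (the role's exact key names and how it persists them are not shown in this log):

    # what "Enable IPv4 forwarding", "Enable IPv6 forwarding" and
    # "Enable IPv6 router advertisements" boil down to
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo sysctl -w net.ipv6.conf.all.forwarding=1
    sudo sysctl -w net.ipv6.conf.all.accept_ra=2
    # the br_netfilter steps, skipped on these nodes, would roughly be
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    sudo modprobe br_netfilter
    sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
    sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=1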
2025-06-22 19:55:10.271861 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-06-22 19:55:10.271873 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:00.759) 0:00:09.539 ***********
2025-06-22 19:55:10.271886 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 19:55:10.271899 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 19:55:10.271911 | orchestrator | skipping: [testbed-node-3]
2025-06-22 19:55:10.271923 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 19:55:10.271934 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 19:55:10.271945 | orchestrator | skipping: [testbed-node-4]
2025-06-22 19:55:10.271956 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 19:55:10.271978 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 19:55:10.271989 | orchestrator | skipping: [testbed-node-5]
2025-06-22 19:55:10.272000 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 19:55:10.272029 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 19:55:10.272040 | orchestrator | skipping: [testbed-node-0]
2025-06-22 19:55:10.272051 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 19:55:10.272062 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 19:55:10.272073 | orchestrator | skipping: [testbed-node-1]
2025-06-22 19:55:10.272091 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-22 19:55:10.272103 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-22 19:55:10.272113 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:55:10.272124 | orchestrator |
2025-06-22 19:55:10.272135 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-06-22 19:55:10.272146 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:01.054) 0:00:10.593 ***********
2025-06-22 19:55:10.272157 | orchestrator | skipping: [testbed-node-3]
2025-06-22 19:55:10.272168 | orchestrator | skipping: [testbed-node-4]
2025-06-22 19:55:10.272179 | orchestrator | skipping: [testbed-node-5]
2025-06-22 19:55:10.272190 | orchestrator | skipping: [testbed-node-0]
2025-06-22 19:55:10.272200 | orchestrator | skipping: [testbed-node-1]
2025-06-22 19:55:10.272211 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:55:10.272222 | orchestrator |
2025-06-22 19:55:10.272233 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-06-22 19:55:10.272245 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:01.075) 0:00:11.669 ***********
2025-06-22 19:55:10.272256 | orchestrator | ok: [testbed-node-3]
2025-06-22 19:55:10.272267 | orchestrator | ok: [testbed-node-4]
2025-06-22 19:55:10.272277 | orchestrator | ok: [testbed-node-5]
2025-06-22 19:55:10.272288 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:55:10.272299 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:55:10.272310 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:55:10.272352 | orchestrator |
2025-06-22 19:55:10.272363 | orchestrator | TASK 
[k3s_download : Download k3s binary x64] ********************************** 2025-06-22 19:55:10.272374 | orchestrator | Sunday 22 June 2025 19:51:09 +0000 (0:00:00.633) 0:00:12.302 *********** 2025-06-22 19:55:10.272385 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:55:10.272396 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:55:10.272407 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:10.272418 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:10.272429 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:55:10.272440 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:10.272450 | orchestrator | 2025-06-22 19:55:10.272461 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-22 19:55:10.272472 | orchestrator | Sunday 22 June 2025 19:51:15 +0000 (0:00:06.143) 0:00:18.445 *********** 2025-06-22 19:55:10.272483 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.272494 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.272505 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.272515 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.272526 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.272537 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.272548 | orchestrator | 2025-06-22 19:55:10.272559 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-22 19:55:10.272570 | orchestrator | Sunday 22 June 2025 19:51:16 +0000 (0:00:01.284) 0:00:19.729 *********** 2025-06-22 19:55:10.272581 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.272592 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.272602 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.272613 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.272624 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.272634 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.272645 | orchestrator | 2025-06-22 19:55:10.272656 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-22 19:55:10.272668 | orchestrator | Sunday 22 June 2025 19:51:19 +0000 (0:00:02.118) 0:00:21.848 *********** 2025-06-22 19:55:10.272679 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.272690 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.272700 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.272711 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.272728 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.272739 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.272750 | orchestrator | 2025-06-22 19:55:10.272761 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-22 19:55:10.272777 | orchestrator | Sunday 22 June 2025 19:51:20 +0000 (0:00:00.987) 0:00:22.835 *********** 2025-06-22 19:55:10.272788 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-22 19:55:10.272799 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-22 19:55:10.272810 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.272821 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-22 19:55:10.272831 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-22 19:55:10.272842 | 
orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.272853 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-22 19:55:10.272864 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-22 19:55:10.272874 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.272885 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-22 19:55:10.272896 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-22 19:55:10.272907 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.272917 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-22 19:55:10.272928 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-22 19:55:10.272939 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.272950 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-22 19:55:10.272961 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-22 19:55:10.272972 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.272982 | orchestrator | 2025-06-22 19:55:10.272993 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-22 19:55:10.273011 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:01.056) 0:00:23.891 *********** 2025-06-22 19:55:10.273023 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.273034 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.273044 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.273055 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.273066 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.273077 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.273087 | orchestrator | 2025-06-22 19:55:10.273098 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-22 19:55:10.273109 | orchestrator | 2025-06-22 19:55:10.273120 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-22 19:55:10.273131 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:01.489) 0:00:25.380 *********** 2025-06-22 19:55:10.273142 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.273153 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.273164 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.273174 | orchestrator | 2025-06-22 19:55:10.273185 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-22 19:55:10.273196 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:01.178) 0:00:26.559 *********** 2025-06-22 19:55:10.273207 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.273218 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.273229 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.273239 | orchestrator | 2025-06-22 19:55:10.273250 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-22 19:55:10.273261 | orchestrator | Sunday 22 June 2025 19:51:25 +0000 (0:00:01.397) 0:00:27.957 *********** 2025-06-22 19:55:10.273272 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.273283 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.273294 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.273304 | orchestrator | 2025-06-22 19:55:10.273333 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] 
**************************** 2025-06-22 19:55:10.273351 | orchestrator | Sunday 22 June 2025 19:51:26 +0000 (0:00:01.546) 0:00:29.503 *********** 2025-06-22 19:55:10.273362 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.273373 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.273384 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.273394 | orchestrator | 2025-06-22 19:55:10.273405 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-22 19:55:10.273416 | orchestrator | Sunday 22 June 2025 19:51:27 +0000 (0:00:01.136) 0:00:30.640 *********** 2025-06-22 19:55:10.273427 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.273438 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.273449 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.273460 | orchestrator | 2025-06-22 19:55:10.273471 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-22 19:55:10.273482 | orchestrator | Sunday 22 June 2025 19:51:28 +0000 (0:00:00.410) 0:00:31.050 *********** 2025-06-22 19:55:10.273493 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:55:10.273504 | orchestrator | 2025-06-22 19:55:10.273515 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-22 19:55:10.273526 | orchestrator | Sunday 22 June 2025 19:51:28 +0000 (0:00:00.612) 0:00:31.662 *********** 2025-06-22 19:55:10.273537 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.273548 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.273559 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.273570 | orchestrator | 2025-06-22 19:55:10.273581 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-22 19:55:10.273591 | orchestrator | Sunday 22 June 2025 19:51:31 +0000 (0:00:02.769) 0:00:34.431 *********** 2025-06-22 19:55:10.273602 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.273613 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.273624 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:10.273635 | orchestrator | 2025-06-22 19:55:10.273645 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-22 19:55:10.273656 | orchestrator | Sunday 22 June 2025 19:51:32 +0000 (0:00:00.971) 0:00:35.403 *********** 2025-06-22 19:55:10.273667 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.273678 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.273689 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:10.273700 | orchestrator | 2025-06-22 19:55:10.273710 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-22 19:55:10.273721 | orchestrator | Sunday 22 June 2025 19:51:33 +0000 (0:00:00.806) 0:00:36.210 *********** 2025-06-22 19:55:10.273732 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.273743 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.273758 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:10.273769 | orchestrator | 2025-06-22 19:55:10.273781 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-22 19:55:10.273792 | orchestrator | Sunday 22 June 2025 19:51:35 +0000 (0:00:02.409) 0:00:38.619 *********** 2025-06-22 
19:55:10.273803 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.273813 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.273825 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.273835 | orchestrator | 2025-06-22 19:55:10.273846 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-22 19:55:10.273857 | orchestrator | Sunday 22 June 2025 19:51:36 +0000 (0:00:00.384) 0:00:39.004 *********** 2025-06-22 19:55:10.273868 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.273879 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.273890 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.273901 | orchestrator | 2025-06-22 19:55:10.273912 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-22 19:55:10.273923 | orchestrator | Sunday 22 June 2025 19:51:36 +0000 (0:00:00.533) 0:00:39.537 *********** 2025-06-22 19:55:10.273934 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:10.273950 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:10.273961 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:10.273972 | orchestrator | 2025-06-22 19:55:10.273983 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2025-06-22 19:55:10.273994 | orchestrator | Sunday 22 June 2025 19:51:39 +0000 (0:00:02.808) 0:00:42.345 *********** 2025-06-22 19:55:10.274011 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-22 19:55:10.274077 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-22 19:55:10.274089 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-22 19:55:10.274100 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:55:10.274111 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:55:10.274122 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:55:10.274133 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:55:10.274144 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:55:10.274156 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:55:10.274167 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-22 19:55:10.274178 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 
2025-06-22 19:55:10.274189 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-22 19:55:10.274200 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-22 19:55:10.274211 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-22 19:55:10.274221 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-06-22 19:55:10.274232 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.274244 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.274254 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.274265 | orchestrator | 2025-06-22 19:55:10.274276 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-22 19:55:10.274287 | orchestrator | Sunday 22 June 2025 19:52:34 +0000 (0:00:55.375) 0:01:37.721 *********** 2025-06-22 19:55:10.274298 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.274309 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.274338 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.274349 | orchestrator | 2025-06-22 19:55:10.274360 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-22 19:55:10.274371 | orchestrator | Sunday 22 June 2025 19:52:35 +0000 (0:00:00.298) 0:01:38.019 *********** 2025-06-22 19:55:10.274382 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:10.274393 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:10.274403 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:10.274421 | orchestrator | 2025-06-22 19:55:10.274432 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-22 19:55:10.274443 | orchestrator | Sunday 22 June 2025 19:52:36 +0000 (0:00:01.096) 0:01:39.116 *********** 2025-06-22 19:55:10.274455 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:10.274466 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:10.274481 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:10.274493 | orchestrator | 2025-06-22 19:55:10.274504 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-22 19:55:10.274515 | orchestrator | Sunday 22 June 2025 19:52:37 +0000 (0:00:01.393) 0:01:40.509 *********** 2025-06-22 19:55:10.274526 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:10.274537 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:10.274548 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:10.274559 | orchestrator | 2025-06-22 19:55:10.274569 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-22 19:55:10.274581 | orchestrator | Sunday 22 June 2025 19:52:52 +0000 (0:00:15.092) 0:01:55.602 *********** 2025-06-22 19:55:10.274592 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.274603 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.274614 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.274625 | orchestrator | 2025-06-22 19:55:10.274636 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-22 
19:55:10.274647 | orchestrator | Sunday 22 June 2025 19:52:53 +0000 (0:00:00.714) 0:01:56.317 ***********
2025-06-22 19:55:10.274658 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:55:10.274669 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:55:10.274680 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:55:10.274690 | orchestrator |
2025-06-22 19:55:10.274701 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-06-22 19:55:10.274712 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.673) 0:01:56.990 ***********
2025-06-22 19:55:10.274723 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:55:10.274734 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:55:10.274745 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:55:10.274756 | orchestrator |
2025-06-22 19:55:10.274775 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-06-22 19:55:10.274786 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.712) 0:01:57.702 ***********
2025-06-22 19:55:10.274797 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:55:10.274808 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:55:10.274818 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:55:10.274829 | orchestrator |
2025-06-22 19:55:10.274840 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-06-22 19:55:10.274851 | orchestrator | Sunday 22 June 2025 19:52:55 +0000 (0:00:01.001) 0:01:58.704 ***********
2025-06-22 19:55:10.274862 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:55:10.274873 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:55:10.274884 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:55:10.274894 | orchestrator |
2025-06-22 19:55:10.274905 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-06-22 19:55:10.274916 | orchestrator | Sunday 22 June 2025 19:52:56 +0000 (0:00:00.318) 0:01:59.022 ***********
2025-06-22 19:55:10.274927 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:55:10.274938 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:55:10.274949 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:55:10.274960 | orchestrator |
2025-06-22 19:55:10.274971 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-06-22 19:55:10.274982 | orchestrator | Sunday 22 June 2025 19:52:56 +0000 (0:00:00.635) 0:01:59.658 ***********
2025-06-22 19:55:10.274993 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:55:10.275003 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:55:10.275014 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:55:10.275025 | orchestrator |
2025-06-22 19:55:10.275036 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-06-22 19:55:10.275053 | orchestrator | Sunday 22 June 2025 19:52:57 +0000 (0:00:00.652) 0:02:00.311 ***********
2025-06-22 19:55:10.275065 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:55:10.275075 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:55:10.275086 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:55:10.275097 | orchestrator |
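The node-token and kubeconfig handling in these tasks can be reproduced by hand on a control-plane node. A small sketch, assuming the default k3s file locations (/var/lib/rancher/k3s/server/node-token, /etc/rancher/k3s/k3s.yaml) and the operator user's home directory; only the VIP URL below is taken from this log:

    # the join token that "Read node-token from master" picks up and stores
    sudo cat /var/lib/rancher/k3s/server/node-token
    # what "Create directory .kube" / "Copy config file to user home directory" amount to
    mkdir -p ~/.kube
    sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
    sudo chown "$(id -u):$(id -g)" ~/.kube/config
    # the next task then points kubectl at the cluster VIP instead of the local API endpoint
    kubectl config set-cluster default --server=https://192.168.16.8:6443 --kubeconfig ~/.kube/config

The cluster name "default" is what k3s writes into k3s.yaml by default; if the role renames the context, the set-cluster target has to match.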
2025-06-22 19:55:10.275108 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-06-22 19:55:10.275119 | orchestrator | Sunday 22 June 2025 19:52:58 +0000 (0:00:01.108) 0:02:01.420 ***********
2025-06-22 19:55:10.275130 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:55:10.275141 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:55:10.275152 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:55:10.275162 | orchestrator |
2025-06-22 19:55:10.275173 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-06-22 19:55:10.275184 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:00.915) 0:02:02.335 ***********
2025-06-22 19:55:10.275195 | orchestrator | skipping: [testbed-node-0]
2025-06-22 19:55:10.275206 | orchestrator | skipping: [testbed-node-1]
2025-06-22 19:55:10.275217 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:55:10.275227 | orchestrator |
2025-06-22 19:55:10.275238 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-06-22 19:55:10.275249 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:00.293) 0:02:02.629 ***********
2025-06-22 19:55:10.275260 | orchestrator | skipping: [testbed-node-0]
2025-06-22 19:55:10.275271 | orchestrator | skipping: [testbed-node-1]
2025-06-22 19:55:10.275281 | orchestrator | skipping: [testbed-node-2]
2025-06-22 19:55:10.275292 | orchestrator |
2025-06-22 19:55:10.275303 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-06-22 19:55:10.275338 | orchestrator | Sunday 22 June 2025 19:53:00 +0000 (0:00:00.334) 0:02:02.964 ***********
2025-06-22 19:55:10.275349 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:55:10.275360 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:55:10.275371 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:55:10.275382 | orchestrator |
2025-06-22 19:55:10.275407 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-06-22 19:55:10.275419 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:01.133) 0:02:04.098 ***********
2025-06-22 19:55:10.275430 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:55:10.275441 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:55:10.275460 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:55:10.275471 | orchestrator |
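Before the clean-up task that follows, the bootstrap manifests can be inspected directly on a server node. The file names are the ones removed in this run; the command itself is only a convenience sketch:

    # these are auto-applied by k3s on start unless removed:
    # ccm.yaml, rolebindings.yaml, local-storage.yaml, runtimes.yaml,
    # vip.yaml, vip-rbac.yaml, coredns.yaml and the metrics-server directory
    sudo ls -R /var/lib/rancher/k3s/server/manifests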
(item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-22 19:55:10.275592 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-22 19:55:10.275603 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:55:10.275621 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-06-22 19:55:10.275632 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:55:10.275649 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:55:10.275660 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-06-22 19:55:10.275671 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:55:10.275682 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:55:10.275693 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:55:10.275704 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:55:10.275715 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:55:10.275726 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:55:10.275737 | orchestrator | 2025-06-22 19:55:10.275749 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-06-22 19:55:10.275759 | orchestrator | 2025-06-22 19:55:10.275770 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-06-22 19:55:10.275781 | orchestrator | Sunday 22 June 2025 19:53:05 +0000 (0:00:03.584) 0:02:08.366 *********** 2025-06-22 19:55:10.275793 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:55:10.275804 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:55:10.275814 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:55:10.275826 | orchestrator | 2025-06-22 19:55:10.275837 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-06-22 19:55:10.275848 | orchestrator | Sunday 22 June 2025 19:53:05 +0000 (0:00:00.431) 0:02:08.797 *********** 2025-06-22 19:55:10.275859 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:55:10.275870 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:55:10.275881 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:55:10.275892 | orchestrator | 2025-06-22 19:55:10.275903 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-06-22 19:55:10.275914 | orchestrator | Sunday 22 June 2025 19:53:06 +0000 (0:00:00.587) 0:02:09.384 *********** 2025-06-22 19:55:10.275925 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:55:10.275935 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:55:10.275946 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:55:10.275957 | orchestrator | 2025-06-22 19:55:10.275968 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-06-22 19:55:10.275979 | orchestrator | Sunday 22 June 2025 19:53:06 +0000 (0:00:00.265) 0:02:09.650 *********** 
2025-06-22 19:55:10.275990 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:55:10.276001 | orchestrator | 2025-06-22 19:55:10.276632 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-06-22 19:55:10.276655 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:00.532) 0:02:10.182 *********** 2025-06-22 19:55:10.276666 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.276677 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.276687 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.276698 | orchestrator | 2025-06-22 19:55:10.276709 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2025-06-22 19:55:10.276720 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:00.305) 0:02:10.488 *********** 2025-06-22 19:55:10.276731 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.276742 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.276753 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.276774 | orchestrator | 2025-06-22 19:55:10.276785 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-06-22 19:55:10.276796 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:00.295) 0:02:10.784 *********** 2025-06-22 19:55:10.276807 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.276818 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.276829 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.276840 | orchestrator | 2025-06-22 19:55:10.276851 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-06-22 19:55:10.276862 | orchestrator | Sunday 22 June 2025 19:53:08 +0000 (0:00:00.327) 0:02:11.111 *********** 2025-06-22 19:55:10.276872 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:55:10.276882 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:55:10.276892 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:55:10.276902 | orchestrator | 2025-06-22 19:55:10.276911 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-06-22 19:55:10.276921 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:01.399) 0:02:12.510 *********** 2025-06-22 19:55:10.276931 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:55:10.276940 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:55:10.276950 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:55:10.276959 | orchestrator | 2025-06-22 19:55:10.276969 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-22 19:55:10.276979 | orchestrator | 2025-06-22 19:55:10.276988 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-22 19:55:10.276998 | orchestrator | Sunday 22 June 2025 19:53:18 +0000 (0:00:09.314) 0:02:21.825 *********** 2025-06-22 19:55:10.277008 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:10.277017 | orchestrator | 2025-06-22 19:55:10.277027 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-22 19:55:10.277037 | orchestrator | Sunday 22 June 2025 19:53:19 +0000 (0:00:00.997) 0:02:22.822 *********** 2025-06-22 19:55:10.277047 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.277057 | 
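On the worker nodes, the two tasks that matter here are "Configure the k3s service" and "Manage k3s service"; the roughly nine seconds spent in the latter is the agents starting and joining the cluster. As a rough sketch only, with the unit name and template file assumed from the k3s-node.service.d task above rather than taken from the role, this boils down to:

# Sketch, not the actual role task: template name and unit name are assumptions.
- name: Configure the k3s service
  ansible.builtin.template:
    src: k3s-node.service.j2
    dest: /etc/systemd/system/k3s-node.service
    owner: root
    group: root
    mode: "0644"

- name: Manage k3s service
  ansible.builtin.systemd:
    name: k3s-node
    state: started
    enabled: true
    daemon_reload: true

The http_proxy tasks before that are skipped, presumably because no proxy is configured for this testbed, and the run then moves on to preparing the kubeconfig on the manager.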
orchestrator | 2025-06-22 19:55:10.277066 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:55:10.277080 | orchestrator | Sunday 22 June 2025 19:53:20 +0000 (0:00:00.405) 0:02:23.228 *********** 2025-06-22 19:55:10.277090 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:55:10.277100 | orchestrator | 2025-06-22 19:55:10.277119 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:55:10.277130 | orchestrator | Sunday 22 June 2025 19:53:21 +0000 (0:00:00.905) 0:02:24.133 *********** 2025-06-22 19:55:10.277139 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.277149 | orchestrator | 2025-06-22 19:55:10.277159 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-22 19:55:10.277169 | orchestrator | Sunday 22 June 2025 19:53:22 +0000 (0:00:00.820) 0:02:24.953 *********** 2025-06-22 19:55:10.277178 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.277188 | orchestrator | 2025-06-22 19:55:10.277197 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-22 19:55:10.277207 | orchestrator | Sunday 22 June 2025 19:53:22 +0000 (0:00:00.544) 0:02:25.498 *********** 2025-06-22 19:55:10.277217 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:55:10.277227 | orchestrator | 2025-06-22 19:55:10.277237 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-22 19:55:10.277247 | orchestrator | Sunday 22 June 2025 19:53:24 +0000 (0:00:01.462) 0:02:26.960 *********** 2025-06-22 19:55:10.277256 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:55:10.277266 | orchestrator | 2025-06-22 19:55:10.277276 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-22 19:55:10.277286 | orchestrator | Sunday 22 June 2025 19:53:25 +0000 (0:00:00.873) 0:02:27.834 *********** 2025-06-22 19:55:10.277295 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.277305 | orchestrator | 2025-06-22 19:55:10.277340 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-22 19:55:10.277350 | orchestrator | Sunday 22 June 2025 19:53:25 +0000 (0:00:00.452) 0:02:28.287 *********** 2025-06-22 19:55:10.277360 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.277370 | orchestrator | 2025-06-22 19:55:10.277379 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-06-22 19:55:10.277389 | orchestrator | 2025-06-22 19:55:10.277399 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-06-22 19:55:10.277409 | orchestrator | Sunday 22 June 2025 19:53:25 +0000 (0:00:00.417) 0:02:28.704 *********** 2025-06-22 19:55:10.277418 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:10.277428 | orchestrator | 2025-06-22 19:55:10.277438 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-06-22 19:55:10.277448 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:00.146) 0:02:28.850 *********** 2025-06-22 19:55:10.277457 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:55:10.277467 | orchestrator | 2025-06-22 19:55:10.277477 | orchestrator | TASK 
[kubectl : Remove old architecture-dependent repository] ****************** 2025-06-22 19:55:10.277487 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:00.427) 0:02:29.278 *********** 2025-06-22 19:55:10.277496 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:10.277506 | orchestrator | 2025-06-22 19:55:10.277516 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-06-22 19:55:10.277526 | orchestrator | Sunday 22 June 2025 19:53:27 +0000 (0:00:00.826) 0:02:30.104 *********** 2025-06-22 19:55:10.277535 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:10.277545 | orchestrator | 2025-06-22 19:55:10.277555 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2025-06-22 19:55:10.277564 | orchestrator | Sunday 22 June 2025 19:53:28 +0000 (0:00:01.590) 0:02:31.695 *********** 2025-06-22 19:55:10.277574 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.277584 | orchestrator | 2025-06-22 19:55:10.277593 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-06-22 19:55:10.277603 | orchestrator | Sunday 22 June 2025 19:53:29 +0000 (0:00:00.754) 0:02:32.449 *********** 2025-06-22 19:55:10.277613 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:10.277623 | orchestrator | 2025-06-22 19:55:10.277632 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-06-22 19:55:10.277642 | orchestrator | Sunday 22 June 2025 19:53:30 +0000 (0:00:00.448) 0:02:32.898 *********** 2025-06-22 19:55:10.277652 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.277661 | orchestrator | 2025-06-22 19:55:10.277671 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-06-22 19:55:10.277681 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:06.018) 0:02:38.916 *********** 2025-06-22 19:55:10.277690 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.277700 | orchestrator | 2025-06-22 19:55:10.277710 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-06-22 19:55:10.277719 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:10.564) 0:02:49.480 *********** 2025-06-22 19:55:10.277729 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:10.277739 | orchestrator | 2025-06-22 19:55:10.277748 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-06-22 19:55:10.277758 | orchestrator | 2025-06-22 19:55:10.277768 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-06-22 19:55:10.277777 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:00.469) 0:02:49.950 *********** 2025-06-22 19:55:10.277787 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.277797 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.277806 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.277816 | orchestrator | 2025-06-22 19:55:10.277826 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-06-22 19:55:10.277836 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:00.433) 0:02:50.383 *********** 2025-06-22 19:55:10.277850 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.277860 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.277870 | orchestrator | skipping: [testbed-node-2] 
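The kubectl installation on the manager recorded above follows the usual Debian-family pattern: download the signing key, fix its permissions, register the repository, and install the package with apt. A minimal sketch of those steps, with the repository channel as a placeholder since the exact URL used by the role is not visible in the log:

# Sketch only: the pkgs.k8s.io channel (v1.31 here) is an assumption, not taken from the role.
- name: Add repository gpg key
  ansible.builtin.get_url:
    url: https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key
    dest: /etc/apt/keyrings/kubernetes-apt-keyring.asc
    mode: "0644"

- name: Add repository Debian
  ansible.builtin.apt_repository:
    repo: "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.asc] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /"
    filename: kubernetes

- name: Install required packages
  ansible.builtin.apt:
    name: kubectl
    state: present
    update_cache: true

Whether the role pins a version or uses a deb822 sources file is not visible here; the log only confirms the key download, the repository add, and an apt install that takes about ten seconds.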
2025-06-22 19:55:10.277879 | orchestrator | 2025-06-22 19:55:10.277889 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-06-22 19:55:10.277899 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:00.263) 0:02:50.647 *********** 2025-06-22 19:55:10.277913 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:55:10.277923 | orchestrator | 2025-06-22 19:55:10.277933 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-06-22 19:55:10.277948 | orchestrator | Sunday 22 June 2025 19:53:48 +0000 (0:00:00.458) 0:02:51.105 *********** 2025-06-22 19:55:10.277958 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:55:10.277968 | orchestrator | 2025-06-22 19:55:10.277978 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-06-22 19:55:10.277987 | orchestrator | Sunday 22 June 2025 19:53:49 +0000 (0:00:01.102) 0:02:52.208 *********** 2025-06-22 19:55:10.277997 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:55:10.278008 | orchestrator | 2025-06-22 19:55:10.278074 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-22 19:55:10.278085 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:00.867) 0:02:53.075 *********** 2025-06-22 19:55:10.278095 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.278104 | orchestrator | 2025-06-22 19:55:10.278114 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-22 19:55:10.278124 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:00.237) 0:02:53.312 *********** 2025-06-22 19:55:10.278134 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:55:10.278143 | orchestrator | 2025-06-22 19:55:10.278153 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-22 19:55:10.278163 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:00.914) 0:02:54.227 *********** 2025-06-22 19:55:10.278172 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.278182 | orchestrator | 2025-06-22 19:55:10.278192 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-22 19:55:10.278202 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:00.159) 0:02:54.386 *********** 2025-06-22 19:55:10.278212 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.278221 | orchestrator | 2025-06-22 19:55:10.278231 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-22 19:55:10.278241 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:00.179) 0:02:54.566 *********** 2025-06-22 19:55:10.278250 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.278260 | orchestrator | 2025-06-22 19:55:10.278269 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-06-22 19:55:10.278279 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:00.183) 0:02:54.749 *********** 2025-06-22 19:55:10.278289 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.278298 | orchestrator | 2025-06-22 19:55:10.278308 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-22 19:55:10.278332 | orchestrator | Sunday 22 June 2025 
19:53:52 +0000 (0:00:00.165) 0:02:54.914 *********** 2025-06-22 19:55:10.278342 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:55:10.278351 | orchestrator | 2025-06-22 19:55:10.278361 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-06-22 19:55:10.278371 | orchestrator | Sunday 22 June 2025 19:53:57 +0000 (0:00:04.937) 0:02:59.851 *********** 2025-06-22 19:55:10.278380 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-22 19:55:10.278390 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2025-06-22 19:55:10.278400 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-22 19:55:10.278416 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-22 19:55:10.278426 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-22 19:55:10.278435 | orchestrator | 2025-06-22 19:55:10.278445 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-22 19:55:10.278455 | orchestrator | Sunday 22 June 2025 19:54:39 +0000 (0:00:42.776) 0:03:42.628 *********** 2025-06-22 19:55:10.278465 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:55:10.278474 | orchestrator | 2025-06-22 19:55:10.278484 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-22 19:55:10.278494 | orchestrator | Sunday 22 June 2025 19:54:41 +0000 (0:00:01.425) 0:03:44.054 *********** 2025-06-22 19:55:10.278504 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:55:10.278513 | orchestrator | 2025-06-22 19:55:10.278523 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-22 19:55:10.278533 | orchestrator | Sunday 22 June 2025 19:54:43 +0000 (0:00:01.938) 0:03:45.993 *********** 2025-06-22 19:55:10.278543 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:55:10.278552 | orchestrator | 2025-06-22 19:55:10.278562 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-06-22 19:55:10.278572 | orchestrator | Sunday 22 June 2025 19:54:44 +0000 (0:00:01.401) 0:03:47.394 *********** 2025-06-22 19:55:10.278581 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.278591 | orchestrator | 2025-06-22 19:55:10.278601 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-22 19:55:10.278610 | orchestrator | Sunday 22 June 2025 19:54:44 +0000 (0:00:00.215) 0:03:47.610 *********** 2025-06-22 19:55:10.278620 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-22 19:55:10.278629 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-22 19:55:10.278639 | orchestrator | 2025-06-22 19:55:10.278649 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-22 19:55:10.278659 | orchestrator | Sunday 22 June 2025 19:54:46 +0000 (0:00:01.977) 0:03:49.587 *********** 2025-06-22 19:55:10.278668 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.278678 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.278688 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.278698 | 
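The BGP manifests applied to the first master are checked afterwards with kubectl get CiliumBGPPeeringPolicy.cilium.io and kubectl get CiliumLoadBalancerIPPool.cilium.io; their contents are not printed in the log. Purely as an illustration of what such resources look like (names, ASNs, peer address, and pool CIDR are placeholders, and the exact spec fields vary with the Cilium version):

# Illustrative only: values below are placeholders, not taken from this deployment.
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: bgp-peering
spec:
  nodeSelector:
    matchLabels:
      node-role.osism.tech/network-plane: "true"   # label applied later in this run
  virtualRouters:
    - localASN: 65000
      exportPodCIDR: true
      neighbors:
        - peerAddress: 192.168.16.1/32
          peerASN: 65000
---
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  blocks:                     # older Cilium releases use spec.cidrs instead
    - cidr: 192.168.112.0/24

The actual neighbor list comes from the _cilium_bgp_neighbors fact set two tasks earlier, and Cilium handling the load-balancer IP pool is consistent with the skipped "Deploy metallb pool" task above.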
orchestrator | 2025-06-22 19:55:10.278707 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-22 19:55:10.278722 | orchestrator | Sunday 22 June 2025 19:54:47 +0000 (0:00:00.370) 0:03:49.957 *********** 2025-06-22 19:55:10.278732 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.278742 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.278751 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.278761 | orchestrator | 2025-06-22 19:55:10.278777 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-22 19:55:10.278787 | orchestrator | 2025-06-22 19:55:10.278797 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-06-22 19:55:10.278807 | orchestrator | Sunday 22 June 2025 19:54:48 +0000 (0:00:00.886) 0:03:50.844 *********** 2025-06-22 19:55:10.278816 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:10.278826 | orchestrator | 2025-06-22 19:55:10.278836 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-22 19:55:10.278846 | orchestrator | Sunday 22 June 2025 19:54:48 +0000 (0:00:00.385) 0:03:51.229 *********** 2025-06-22 19:55:10.278856 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:55:10.278866 | orchestrator | 2025-06-22 19:55:10.278875 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-22 19:55:10.278885 | orchestrator | Sunday 22 June 2025 19:54:48 +0000 (0:00:00.238) 0:03:51.468 *********** 2025-06-22 19:55:10.278895 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:10.278910 | orchestrator | 2025-06-22 19:55:10.278920 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-22 19:55:10.278929 | orchestrator | 2025-06-22 19:55:10.278939 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-22 19:55:10.278949 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:05.628) 0:03:57.096 *********** 2025-06-22 19:55:10.278959 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:55:10.278968 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:55:10.278978 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:55:10.278988 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:10.278997 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:10.279007 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:10.279016 | orchestrator | 2025-06-22 19:55:10.279026 | orchestrator | TASK [Manage labels] *********************************************************** 2025-06-22 19:55:10.279036 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:00.642) 0:03:57.739 *********** 2025-06-22 19:55:10.279045 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:55:10.279055 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:55:10.279065 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:55:10.279074 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:55:10.279084 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:55:10.279093 | orchestrator 
| ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:55:10.279103 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-22 19:55:10.279113 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:55:10.279123 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-22 19:55:10.279132 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:55:10.279142 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-22 19:55:10.279151 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:55:10.279161 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:55:10.279171 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:55:10.279180 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:55:10.279190 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:55:10.279200 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:55:10.279209 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:55:10.279219 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:55:10.279240 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:55:10.279250 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:55:10.279268 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:55:10.279278 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:55:10.279287 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:55:10.279297 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:55:10.279306 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:55:10.279365 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:55:10.279377 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:55:10.279390 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:55:10.279401 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:55:10.279410 | orchestrator | 2025-06-22 19:55:10.279426 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-22 19:55:10.279436 | orchestrator | Sunday 22 June 2025 19:55:06 +0000 (0:00:11.105) 0:04:08.844 *********** 2025-06-22 19:55:10.279445 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.279455 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.279465 | orchestrator | skipping: [testbed-node-5] 
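Each label in the task above is applied as one loop item against the cluster API, delegated to localhost where the kubeconfig lives. A minimal sketch of such a task, assuming a per-host k3s_node_labels list (the variable name is an assumption; the label strings are the ones shown in the log):

# Sketch only: k3s_node_labels is an assumed variable holding entries such as
# 'node-role.osism.tech/control-plane=true' or 'openstack-control-plane=enabled'.
- name: Manage labels
  ansible.builtin.command:
    cmd: "kubectl label node {{ inventory_hostname }} {{ item }} --overwrite"
  loop: "{{ k3s_node_labels }}"
  delegate_to: localhost

The control-plane nodes 0-2 end up with the osism.tech control-plane, network-plane, and rook-mds/mgr/mon/rgw roles plus openstack-control-plane=enabled, while the workers 3-5 get compute-plane, node-role.kubernetes.io/worker, and rook-osd; the annotation and taint tasks that follow are skipped, presumably because none are defined for this testbed.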
2025-06-22 19:55:10.279474 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.279484 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.279493 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.279503 | orchestrator | 2025-06-22 19:55:10.279512 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-22 19:55:10.279522 | orchestrator | Sunday 22 June 2025 19:55:06 +0000 (0:00:00.623) 0:04:09.467 *********** 2025-06-22 19:55:10.279532 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:55:10.279541 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:55:10.279551 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:55:10.279560 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:10.279570 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:10.279579 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:10.279589 | orchestrator | 2025-06-22 19:55:10.279598 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:55:10.279608 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:55:10.279619 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-22 19:55:10.279629 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-22 19:55:10.279639 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-22 19:55:10.279649 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:55:10.279658 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:55:10.279668 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:55:10.279677 | orchestrator | 2025-06-22 19:55:10.279687 | orchestrator | 2025-06-22 19:55:10.279697 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:55:10.279705 | orchestrator | Sunday 22 June 2025 19:55:07 +0000 (0:00:00.420) 0:04:09.887 *********** 2025-06-22 19:55:10.279713 | orchestrator | =============================================================================== 2025-06-22 19:55:10.279721 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.38s 2025-06-22 19:55:10.279729 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.78s 2025-06-22 19:55:10.279737 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.09s 2025-06-22 19:55:10.279744 | orchestrator | Manage labels ---------------------------------------------------------- 11.11s 2025-06-22 19:55:10.279757 | orchestrator | kubectl : Install required packages ------------------------------------ 10.56s 2025-06-22 19:55:10.279765 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.31s 2025-06-22 19:55:10.279773 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.14s 2025-06-22 19:55:10.279781 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.02s 2025-06-22 19:55:10.279788 | orchestrator | k9s : 
Install k9s packages ---------------------------------------------- 5.63s 2025-06-22 19:55:10.279796 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.94s 2025-06-22 19:55:10.279804 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.58s 2025-06-22 19:55:10.279812 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.81s 2025-06-22 19:55:10.279820 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.77s 2025-06-22 19:55:10.279828 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.41s 2025-06-22 19:55:10.279835 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.27s 2025-06-22 19:55:10.279843 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.12s 2025-06-22 19:55:10.279851 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 1.98s 2025-06-22 19:55:10.279859 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 1.97s 2025-06-22 19:55:10.279867 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.94s 2025-06-22 19:55:10.279874 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.59s 2025-06-22 19:55:10.279885 | orchestrator | 2025-06-22 19:55:10 | INFO  | Task 5cab8435-56c7-4452-9437-e02047ebb90d is in state SUCCESS 2025-06-22 19:55:10.279893 | orchestrator | 2025-06-22 19:55:10 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:10.279905 | orchestrator | 2025-06-22 19:55:10 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:10.279914 | orchestrator | 2025-06-22 19:55:10 | INFO  | Task 282a8a33-f382-405c-8904-0b2708025428 is in state STARTED 2025-06-22 19:55:10.279922 | orchestrator | 2025-06-22 19:55:10 | INFO  | Task 1b9a1ee8-1839-47f6-9ba3-4eba1aa6a2f3 is in state STARTED 2025-06-22 19:55:10.279929 | orchestrator | 2025-06-22 19:55:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:13.312258 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:13.312380 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:13.312693 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:13.313172 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:13.315059 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 282a8a33-f382-405c-8904-0b2708025428 is in state STARTED 2025-06-22 19:55:13.315641 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 1b9a1ee8-1839-47f6-9ba3-4eba1aa6a2f3 is in state SUCCESS 2025-06-22 19:55:13.315665 | orchestrator | 2025-06-22 19:55:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:16.346791 | orchestrator | 2025-06-22 19:55:16 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:16.346877 | orchestrator | 2025-06-22 19:55:16 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:16.347723 | orchestrator | 2025-06-22 19:55:16 | INFO 
 | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:16.349136 | orchestrator | 2025-06-22 19:55:16 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:16.349906 | orchestrator | 2025-06-22 19:55:16 | INFO  | Task 282a8a33-f382-405c-8904-0b2708025428 is in state STARTED 2025-06-22 19:55:16.351431 | orchestrator | 2025-06-22 19:55:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:19.390128 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:19.390213 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:19.391011 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:19.391305 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:19.391762 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task 282a8a33-f382-405c-8904-0b2708025428 is in state SUCCESS 2025-06-22 19:55:19.393205 | orchestrator | 2025-06-22 19:55:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:22.428172 | orchestrator | 2025-06-22 19:55:22 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:22.428571 | orchestrator | 2025-06-22 19:55:22 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:22.429701 | orchestrator | 2025-06-22 19:55:22 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:22.430490 | orchestrator | 2025-06-22 19:55:22 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:22.430536 | orchestrator | 2025-06-22 19:55:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:25.475747 | orchestrator | 2025-06-22 19:55:25 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:25.476409 | orchestrator | 2025-06-22 19:55:25 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:25.476980 | orchestrator | 2025-06-22 19:55:25 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:25.478400 | orchestrator | 2025-06-22 19:55:25 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:25.478489 | orchestrator | 2025-06-22 19:55:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:28.527997 | orchestrator | 2025-06-22 19:55:28 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:28.531663 | orchestrator | 2025-06-22 19:55:28 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:28.532196 | orchestrator | 2025-06-22 19:55:28 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:28.535383 | orchestrator | 2025-06-22 19:55:28 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:28.535435 | orchestrator | 2025-06-22 19:55:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:31.596638 | orchestrator | 2025-06-22 19:55:31 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:31.597351 | orchestrator | 2025-06-22 19:55:31 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:31.599154 | orchestrator | 2025-06-22 19:55:31 | INFO  | 
Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:31.599191 | orchestrator | 2025-06-22 19:55:31 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:31.599229 | orchestrator | 2025-06-22 19:55:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:34.635680 | orchestrator | 2025-06-22 19:55:34 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:34.636816 | orchestrator | 2025-06-22 19:55:34 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:34.639252 | orchestrator | 2025-06-22 19:55:34 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:34.640304 | orchestrator | 2025-06-22 19:55:34 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:34.640367 | orchestrator | 2025-06-22 19:55:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:37.679777 | orchestrator | 2025-06-22 19:55:37 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:37.681387 | orchestrator | 2025-06-22 19:55:37 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:37.685211 | orchestrator | 2025-06-22 19:55:37 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:37.687330 | orchestrator | 2025-06-22 19:55:37 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:37.687356 | orchestrator | 2025-06-22 19:55:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:40.734990 | orchestrator | 2025-06-22 19:55:40 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:40.735078 | orchestrator | 2025-06-22 19:55:40 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:40.735534 | orchestrator | 2025-06-22 19:55:40 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:40.737042 | orchestrator | 2025-06-22 19:55:40 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:40.737429 | orchestrator | 2025-06-22 19:55:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:43.798584 | orchestrator | 2025-06-22 19:55:43 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:43.798683 | orchestrator | 2025-06-22 19:55:43 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:43.799219 | orchestrator | 2025-06-22 19:55:43 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:43.799856 | orchestrator | 2025-06-22 19:55:43 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:43.799879 | orchestrator | 2025-06-22 19:55:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:46.849370 | orchestrator | 2025-06-22 19:55:46 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:46.851406 | orchestrator | 2025-06-22 19:55:46 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:46.852962 | orchestrator | 2025-06-22 19:55:46 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:46.854900 | orchestrator | 2025-06-22 19:55:46 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:46.855637 | orchestrator | 2025-06-22 19:55:46 | INFO  | 
Wait 1 second(s) until the next check 2025-06-22 19:55:49.911367 | orchestrator | 2025-06-22 19:55:49 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:49.912352 | orchestrator | 2025-06-22 19:55:49 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:49.914868 | orchestrator | 2025-06-22 19:55:49 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:49.916925 | orchestrator | 2025-06-22 19:55:49 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:49.917523 | orchestrator | 2025-06-22 19:55:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:52.964424 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:52.966796 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:52.968184 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:52.969952 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:52.969986 | orchestrator | 2025-06-22 19:55:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:56.008168 | orchestrator | 2025-06-22 19:55:56 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:56.008264 | orchestrator | 2025-06-22 19:55:56 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:56.008278 | orchestrator | 2025-06-22 19:55:56 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:56.010193 | orchestrator | 2025-06-22 19:55:56 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:56.010231 | orchestrator | 2025-06-22 19:55:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:59.060201 | orchestrator | 2025-06-22 19:55:59 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:55:59.060513 | orchestrator | 2025-06-22 19:55:59 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:55:59.061342 | orchestrator | 2025-06-22 19:55:59 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:55:59.064434 | orchestrator | 2025-06-22 19:55:59 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:55:59.064461 | orchestrator | 2025-06-22 19:55:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:02.098469 | orchestrator | 2025-06-22 19:56:02 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:02.098962 | orchestrator | 2025-06-22 19:56:02 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:02.099881 | orchestrator | 2025-06-22 19:56:02 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:02.101006 | orchestrator | 2025-06-22 19:56:02 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:56:02.101045 | orchestrator | 2025-06-22 19:56:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:05.137796 | orchestrator | 2025-06-22 19:56:05 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:05.138400 | orchestrator | 2025-06-22 19:56:05 | INFO  | Task 
9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:05.140121 | orchestrator | 2025-06-22 19:56:05 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:05.140147 | orchestrator | 2025-06-22 19:56:05 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:56:05.140159 | orchestrator | 2025-06-22 19:56:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:08.182749 | orchestrator | 2025-06-22 19:56:08 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:08.182874 | orchestrator | 2025-06-22 19:56:08 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:08.183386 | orchestrator | 2025-06-22 19:56:08 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:08.187900 | orchestrator | 2025-06-22 19:56:08 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:56:08.187941 | orchestrator | 2025-06-22 19:56:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:11.226183 | orchestrator | 2025-06-22 19:56:11 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:11.227378 | orchestrator | 2025-06-22 19:56:11 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:11.228618 | orchestrator | 2025-06-22 19:56:11 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:11.230013 | orchestrator | 2025-06-22 19:56:11 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state STARTED 2025-06-22 19:56:11.230074 | orchestrator | 2025-06-22 19:56:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:14.257808 | orchestrator | 2025-06-22 19:56:14 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:14.260650 | orchestrator | 2025-06-22 19:56:14 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:14.260688 | orchestrator | 2025-06-22 19:56:14 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:14.260701 | orchestrator | 2025-06-22 19:56:14 | INFO  | Task 2f05e5df-48c6-47c1-9a44-6e9cf21a2159 is in state SUCCESS 2025-06-22 19:56:14.260713 | orchestrator | 2025-06-22 19:56:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:14.261893 | orchestrator | 2025-06-22 19:56:14.261971 | orchestrator | 2025-06-22 19:56:14.261986 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-22 19:56:14.261997 | orchestrator | 2025-06-22 19:56:14.262009 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:56:14.262073 | orchestrator | Sunday 22 June 2025 19:55:10 +0000 (0:00:00.147) 0:00:00.147 *********** 2025-06-22 19:56:14.262087 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:56:14.262098 | orchestrator | 2025-06-22 19:56:14.262109 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:56:14.262120 | orchestrator | Sunday 22 June 2025 19:55:11 +0000 (0:00:00.772) 0:00:00.919 *********** 2025-06-22 19:56:14.262131 | orchestrator | changed: [testbed-manager] 2025-06-22 19:56:14.262142 | orchestrator | 2025-06-22 19:56:14.262153 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-22 
19:56:14.262164 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.915) 0:00:01.834 *********** 2025-06-22 19:56:14.262175 | orchestrator | changed: [testbed-manager] 2025-06-22 19:56:14.262186 | orchestrator | 2025-06-22 19:56:14.262197 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:56:14.262209 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:56:14.262221 | orchestrator | 2025-06-22 19:56:14.262232 | orchestrator | 2025-06-22 19:56:14.262243 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:56:14.262255 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.367) 0:00:02.202 *********** 2025-06-22 19:56:14.262266 | orchestrator | =============================================================================== 2025-06-22 19:56:14.262277 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.92s 2025-06-22 19:56:14.262287 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.77s 2025-06-22 19:56:14.262392 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.37s 2025-06-22 19:56:14.262407 | orchestrator | 2025-06-22 19:56:14.262418 | orchestrator | 2025-06-22 19:56:14.262429 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-22 19:56:14.262440 | orchestrator | 2025-06-22 19:56:14.262452 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-22 19:56:14.262465 | orchestrator | Sunday 22 June 2025 19:55:11 +0000 (0:00:00.138) 0:00:00.138 *********** 2025-06-22 19:56:14.262478 | orchestrator | ok: [testbed-manager] 2025-06-22 19:56:14.262491 | orchestrator | 2025-06-22 19:56:14.262503 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-22 19:56:14.262515 | orchestrator | Sunday 22 June 2025 19:55:11 +0000 (0:00:00.529) 0:00:00.667 *********** 2025-06-22 19:56:14.262527 | orchestrator | ok: [testbed-manager] 2025-06-22 19:56:14.262539 | orchestrator | 2025-06-22 19:56:14.262551 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:56:14.262564 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.427) 0:00:01.095 *********** 2025-06-22 19:56:14.262576 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:56:14.262588 | orchestrator | 2025-06-22 19:56:14.262600 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:56:14.262612 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.690) 0:00:01.785 *********** 2025-06-22 19:56:14.262624 | orchestrator | changed: [testbed-manager] 2025-06-22 19:56:14.262637 | orchestrator | 2025-06-22 19:56:14.262649 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-22 19:56:14.262661 | orchestrator | Sunday 22 June 2025 19:55:13 +0000 (0:00:00.939) 0:00:02.725 *********** 2025-06-22 19:56:14.262673 | orchestrator | changed: [testbed-manager] 2025-06-22 19:56:14.262686 | orchestrator | 2025-06-22 19:56:14.262698 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-22 19:56:14.262710 | orchestrator | Sunday 22 June 2025 
19:55:14 +0000 (0:00:00.820) 0:00:03.546 *********** 2025-06-22 19:56:14.262722 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:56:14.262735 | orchestrator | 2025-06-22 19:56:14.262747 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-22 19:56:14.262758 | orchestrator | Sunday 22 June 2025 19:55:15 +0000 (0:00:01.328) 0:00:04.874 *********** 2025-06-22 19:56:14.262769 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:56:14.262779 | orchestrator | 2025-06-22 19:56:14.262790 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-22 19:56:14.262801 | orchestrator | Sunday 22 June 2025 19:55:16 +0000 (0:00:00.611) 0:00:05.485 *********** 2025-06-22 19:56:14.262826 | orchestrator | ok: [testbed-manager] 2025-06-22 19:56:14.262837 | orchestrator | 2025-06-22 19:56:14.262848 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-22 19:56:14.262859 | orchestrator | Sunday 22 June 2025 19:55:16 +0000 (0:00:00.368) 0:00:05.854 *********** 2025-06-22 19:56:14.262870 | orchestrator | ok: [testbed-manager] 2025-06-22 19:56:14.262880 | orchestrator | 2025-06-22 19:56:14.262891 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:56:14.262902 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:56:14.262913 | orchestrator | 2025-06-22 19:56:14.262923 | orchestrator | 2025-06-22 19:56:14.262934 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:56:14.262945 | orchestrator | Sunday 22 June 2025 19:55:17 +0000 (0:00:00.279) 0:00:06.134 *********** 2025-06-22 19:56:14.262956 | orchestrator | =============================================================================== 2025-06-22 19:56:14.262966 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.33s 2025-06-22 19:56:14.262977 | orchestrator | Write kubeconfig file --------------------------------------------------- 0.94s 2025-06-22 19:56:14.262995 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.82s 2025-06-22 19:56:14.263022 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-06-22 19:56:14.263034 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.61s 2025-06-22 19:56:14.263045 | orchestrator | Get home directory of operator user ------------------------------------- 0.53s 2025-06-22 19:56:14.263055 | orchestrator | Create .kube directory -------------------------------------------------- 0.43s 2025-06-22 19:56:14.263066 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.37s 2025-06-22 19:56:14.263077 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.28s 2025-06-22 19:56:14.263088 | orchestrator | 2025-06-22 19:56:14.263099 | orchestrator | 2025-06-22 19:56:14.263110 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-22 19:56:14.263121 | orchestrator | 2025-06-22 19:56:14.263132 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-22 19:56:14.263143 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 
(0:00:00.331) 0:00:00.331 *********** 2025-06-22 19:56:14.263154 | orchestrator | ok: [localhost] => { 2025-06-22 19:56:14.263165 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-06-22 19:56:14.263176 | orchestrator | } 2025-06-22 19:56:14.263187 | orchestrator | 2025-06-22 19:56:14.263198 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-22 19:56:14.263209 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:00.077) 0:00:00.408 *********** 2025-06-22 19:56:14.263220 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-22 19:56:14.263233 | orchestrator | ...ignoring 2025-06-22 19:56:14.263244 | orchestrator | 2025-06-22 19:56:14.263255 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-22 19:56:14.263265 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:02.980) 0:00:03.388 *********** 2025-06-22 19:56:14.263276 | orchestrator | skipping: [localhost] 2025-06-22 19:56:14.263287 | orchestrator | 2025-06-22 19:56:14.263298 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-22 19:56:14.263330 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:00.084) 0:00:03.473 *********** 2025-06-22 19:56:14.263343 | orchestrator | ok: [localhost] 2025-06-22 19:56:14.263354 | orchestrator | 2025-06-22 19:56:14.263365 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:56:14.263375 | orchestrator | 2025-06-22 19:56:14.263386 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:56:14.263397 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:00.247) 0:00:03.720 *********** 2025-06-22 19:56:14.263408 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:14.263419 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:14.263429 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:14.263440 | orchestrator | 2025-06-22 19:56:14.263451 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:56:14.263462 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.393) 0:00:04.114 *********** 2025-06-22 19:56:14.263472 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-22 19:56:14.263483 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-22 19:56:14.263494 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-22 19:56:14.263505 | orchestrator | 2025-06-22 19:56:14.263516 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-22 19:56:14.263526 | orchestrator | 2025-06-22 19:56:14.263537 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:56:14.263548 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.566) 0:00:04.680 *********** 2025-06-22 19:56:14.263567 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:14.263578 | orchestrator | 2025-06-22 19:56:14.263589 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-22 
19:56:14.263600 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:00.520) 0:00:05.201 *********** 2025-06-22 19:56:14.263610 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:14.263621 | orchestrator | 2025-06-22 19:56:14.263632 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-22 19:56:14.263643 | orchestrator | Sunday 22 June 2025 19:54:01 +0000 (0:00:01.532) 0:00:06.733 *********** 2025-06-22 19:56:14.263653 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:14.263664 | orchestrator | 2025-06-22 19:56:14.263675 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-22 19:56:14.263691 | orchestrator | Sunday 22 June 2025 19:54:02 +0000 (0:00:00.935) 0:00:07.669 *********** 2025-06-22 19:56:14.263702 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:14.263713 | orchestrator | 2025-06-22 19:56:14.263724 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-22 19:56:14.263734 | orchestrator | Sunday 22 June 2025 19:54:03 +0000 (0:00:00.668) 0:00:08.337 *********** 2025-06-22 19:56:14.263745 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:14.263756 | orchestrator | 2025-06-22 19:56:14.263766 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-22 19:56:14.263777 | orchestrator | Sunday 22 June 2025 19:54:03 +0000 (0:00:00.277) 0:00:08.615 *********** 2025-06-22 19:56:14.263788 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:14.263799 | orchestrator | 2025-06-22 19:56:14.263809 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:56:14.263820 | orchestrator | Sunday 22 June 2025 19:54:04 +0000 (0:00:00.393) 0:00:09.008 *********** 2025-06-22 19:56:14.263831 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:14.263842 | orchestrator | 2025-06-22 19:56:14.263852 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-22 19:56:14.263870 | orchestrator | Sunday 22 June 2025 19:54:04 +0000 (0:00:00.771) 0:00:09.780 *********** 2025-06-22 19:56:14.263881 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:14.263892 | orchestrator | 2025-06-22 19:56:14.263902 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-22 19:56:14.263913 | orchestrator | Sunday 22 June 2025 19:54:05 +0000 (0:00:01.071) 0:00:10.851 *********** 2025-06-22 19:56:14.263924 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:14.263934 | orchestrator | 2025-06-22 19:56:14.263945 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-22 19:56:14.263956 | orchestrator | Sunday 22 June 2025 19:54:06 +0000 (0:00:00.426) 0:00:11.278 *********** 2025-06-22 19:56:14.263967 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:14.263977 | orchestrator | 2025-06-22 19:56:14.263988 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-22 19:56:14.263998 | orchestrator | Sunday 22 June 2025 19:54:06 +0000 (0:00:00.383) 0:00:11.661 *********** 2025-06-22 19:56:14.264015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264039 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264070 | orchestrator | 2025-06-22 19:56:14.264081 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-22 19:56:14.264092 | orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:00.873) 0:00:12.534 *********** 2025-06-22 19:56:14.264111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264156 | orchestrator | 2025-06-22 19:56:14.264166 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-22 19:56:14.264182 | orchestrator | Sunday 22 June 2025 19:54:09 +0000 (0:00:01.558) 0:00:14.093 *********** 2025-06-22 19:56:14.264193 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 
19:56:14.264204 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 19:56:14.264215 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 19:56:14.264226 | orchestrator | 2025-06-22 19:56:14.264236 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-22 19:56:14.264247 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:02.025) 0:00:16.118 *********** 2025-06-22 19:56:14.264258 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:56:14.264268 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:56:14.264279 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:56:14.264290 | orchestrator | 2025-06-22 19:56:14.264300 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-22 19:56:14.264340 | orchestrator | Sunday 22 June 2025 19:54:13 +0000 (0:00:02.571) 0:00:18.690 *********** 2025-06-22 19:56:14.264353 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:56:14.264363 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:56:14.264374 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:56:14.264385 | orchestrator | 2025-06-22 19:56:14.264396 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-22 19:56:14.264406 | orchestrator | Sunday 22 June 2025 19:54:15 +0000 (0:00:01.442) 0:00:20.132 *********** 2025-06-22 19:56:14.264417 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:56:14.264434 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:56:14.264445 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:56:14.264456 | orchestrator | 2025-06-22 19:56:14.264467 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-22 19:56:14.264478 | orchestrator | Sunday 22 June 2025 19:54:17 +0000 (0:00:01.807) 0:00:21.939 *********** 2025-06-22 19:56:14.264488 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:56:14.264499 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:56:14.264510 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:56:14.264521 | orchestrator | 2025-06-22 19:56:14.264532 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-22 19:56:14.264543 | orchestrator | Sunday 22 June 2025 19:54:19 +0000 (0:00:02.582) 0:00:24.521 *********** 2025-06-22 19:56:14.264553 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 19:56:14.264564 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 
19:56:14.264575 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 19:56:14.264585 | orchestrator | 2025-06-22 19:56:14.264596 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:56:14.264607 | orchestrator | Sunday 22 June 2025 19:54:21 +0000 (0:00:01.472) 0:00:25.994 *********** 2025-06-22 19:56:14.264617 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:14.264628 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:14.264639 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:14.264649 | orchestrator | 2025-06-22 19:56:14.264660 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-22 19:56:14.264671 | orchestrator | Sunday 22 June 2025 19:54:21 +0000 (0:00:00.389) 0:00:26.384 *********** 2025-06-22 19:56:14.264683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264797 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:56:14.264818 | orchestrator | 2025-06-22 19:56:14.264829 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-22 19:56:14.264840 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:01.385) 0:00:27.769 *********** 2025-06-22 19:56:14.264850 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:14.264861 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:14.264872 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:14.264883 | orchestrator | 2025-06-22 19:56:14.264893 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-22 19:56:14.264904 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:00.992) 0:00:28.762 *********** 2025-06-22 19:56:14.264915 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:14.264926 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:14.264936 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:14.264947 | orchestrator | 2025-06-22 19:56:14.264958 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-22 19:56:14.264968 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:05.887) 0:00:34.649 *********** 2025-06-22 19:56:14.264979 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:14.264990 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:14.265000 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:14.265011 | orchestrator | 2025-06-22 19:56:14.265022 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:56:14.265033 | orchestrator | 2025-06-22 19:56:14.265044 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 19:56:14.265055 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:00.353) 0:00:35.003 *********** 2025-06-22 19:56:14.265066 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:14.265076 | orchestrator | 2025-06-22 19:56:14.265087 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:56:14.265098 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:00.648) 0:00:35.651 *********** 2025-06-22 19:56:14.265109 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:14.265120 | orchestrator | 2025-06-22 19:56:14.265131 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:56:14.265141 | orchestrator | Sunday 22 June 2025 19:54:31 +0000 (0:00:00.357) 0:00:36.009 *********** 2025-06-22 19:56:14.265152 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:14.265163 | orchestrator | 2025-06-22 19:56:14.265174 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:56:14.265184 | 
orchestrator | Sunday 22 June 2025 19:54:33 +0000 (0:00:02.041) 0:00:38.051 *********** 2025-06-22 19:56:14.265202 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:14.265213 | orchestrator | 2025-06-22 19:56:14.265223 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:56:14.265234 | orchestrator | 2025-06-22 19:56:14.265245 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 19:56:14.265256 | orchestrator | Sunday 22 June 2025 19:55:31 +0000 (0:00:58.215) 0:01:36.266 *********** 2025-06-22 19:56:14.265271 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:14.265282 | orchestrator | 2025-06-22 19:56:14.265293 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:56:14.265304 | orchestrator | Sunday 22 June 2025 19:55:32 +0000 (0:00:00.615) 0:01:36.882 *********** 2025-06-22 19:56:14.265343 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:14.265362 | orchestrator | 2025-06-22 19:56:14.265381 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:56:14.265398 | orchestrator | Sunday 22 June 2025 19:55:32 +0000 (0:00:00.314) 0:01:37.196 *********** 2025-06-22 19:56:14.265416 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:14.265427 | orchestrator | 2025-06-22 19:56:14.265438 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:56:14.265449 | orchestrator | Sunday 22 June 2025 19:55:34 +0000 (0:00:01.871) 0:01:39.068 *********** 2025-06-22 19:56:14.265460 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:14.265470 | orchestrator | 2025-06-22 19:56:14.265481 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:56:14.265492 | orchestrator | 2025-06-22 19:56:14.265503 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 19:56:14.265514 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:16.209) 0:01:55.278 *********** 2025-06-22 19:56:14.265525 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:14.265536 | orchestrator | 2025-06-22 19:56:14.265555 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:56:14.265566 | orchestrator | Sunday 22 June 2025 19:55:51 +0000 (0:00:00.613) 0:01:55.891 *********** 2025-06-22 19:56:14.265577 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:14.265589 | orchestrator | 2025-06-22 19:56:14.265600 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:56:14.265611 | orchestrator | Sunday 22 June 2025 19:55:51 +0000 (0:00:00.255) 0:01:56.147 *********** 2025-06-22 19:56:14.265622 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:14.265633 | orchestrator | 2025-06-22 19:56:14.265643 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:56:14.265654 | orchestrator | Sunday 22 June 2025 19:55:58 +0000 (0:00:06.723) 0:02:02.870 *********** 2025-06-22 19:56:14.265665 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:14.265676 | orchestrator | 2025-06-22 19:56:14.265687 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-22 19:56:14.265698 | orchestrator | 
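The three single-node "Restart rabbitmq services" plays above all follow the same restart-then-wait pattern, and the wait dominates the runtime (roughly 58s, 16s and 12s per node in this run). A minimal sketch of that pattern; the module names and the in-container wait command are chosen for illustration and are not copied from the kolla-ansible source:

# Sketch of the per-node restart-then-wait step seen in the plays above.
- name: Restart rabbitmq container
  community.docker.docker_container:
    name: rabbitmq
    state: started
    restart: true                   # force a restart of the running container

- name: Waiting for rabbitmq to start
  ansible.builtin.command: docker exec rabbitmq rabbitmqctl await_startup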
2025-06-22 19:56:14.265709 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-22 19:56:14.265719 | orchestrator | Sunday 22 June 2025 19:56:09 +0000 (0:00:11.601) 0:02:14.472 *********** 2025-06-22 19:56:14.265730 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:14.265741 | orchestrator | 2025-06-22 19:56:14.265751 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-22 19:56:14.265762 | orchestrator | Sunday 22 June 2025 19:56:10 +0000 (0:00:00.702) 0:02:15.175 *********** 2025-06-22 19:56:14.265773 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 19:56:14.265783 | orchestrator | enable_outward_rabbitmq_True 2025-06-22 19:56:14.265794 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 19:56:14.265805 | orchestrator | outward_rabbitmq_restart 2025-06-22 19:56:14.265816 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:14.265827 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:14.265845 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:14.265856 | orchestrator | 2025-06-22 19:56:14.265866 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-22 19:56:14.265877 | orchestrator | skipping: no hosts matched 2025-06-22 19:56:14.265888 | orchestrator | 2025-06-22 19:56:14.265899 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-22 19:56:14.265910 | orchestrator | skipping: no hosts matched 2025-06-22 19:56:14.265921 | orchestrator | 2025-06-22 19:56:14.265932 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-22 19:56:14.265942 | orchestrator | skipping: no hosts matched 2025-06-22 19:56:14.265953 | orchestrator | 2025-06-22 19:56:14.265964 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:56:14.265975 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-22 19:56:14.265986 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 19:56:14.265998 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:56:14.266009 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:56:14.266079 | orchestrator | 2025-06-22 19:56:14.266092 | orchestrator | 2025-06-22 19:56:14.266103 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:56:14.266114 | orchestrator | Sunday 22 June 2025 19:56:13 +0000 (0:00:02.691) 0:02:17.866 *********** 2025-06-22 19:56:14.266125 | orchestrator | =============================================================================== 2025-06-22 19:56:14.266135 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 86.03s 2025-06-22 19:56:14.266146 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.64s 2025-06-22 19:56:14.266157 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 5.89s 2025-06-22 19:56:14.266167 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.98s 
2025-06-22 19:56:14.266178 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.69s 2025-06-22 19:56:14.266194 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.58s 2025-06-22 19:56:14.266206 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.57s 2025-06-22 19:56:14.266216 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.03s 2025-06-22 19:56:14.266227 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.88s 2025-06-22 19:56:14.266238 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.81s 2025-06-22 19:56:14.266249 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.56s 2025-06-22 19:56:14.266259 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.53s 2025-06-22 19:56:14.266270 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.47s 2025-06-22 19:56:14.266281 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.44s 2025-06-22 19:56:14.266292 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.39s 2025-06-22 19:56:14.266302 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.07s 2025-06-22 19:56:14.266351 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.99s 2025-06-22 19:56:14.266370 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 0.94s 2025-06-22 19:56:14.266381 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.93s 2025-06-22 19:56:14.266391 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.87s 2025-06-22 19:56:17.299972 | orchestrator | 2025-06-22 19:56:17 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:17.302441 | orchestrator | 2025-06-22 19:56:17 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:17.304493 | orchestrator | 2025-06-22 19:56:17 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:17.304530 | orchestrator | 2025-06-22 19:56:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:20.342436 | orchestrator | 2025-06-22 19:56:20 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:20.342686 | orchestrator | 2025-06-22 19:56:20 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:20.343831 | orchestrator | 2025-06-22 19:56:20 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:20.343933 | orchestrator | 2025-06-22 19:56:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:23.373967 | orchestrator | 2025-06-22 19:56:23 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:23.374539 | orchestrator | 2025-06-22 19:56:23 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:23.375646 | orchestrator | 2025-06-22 19:56:23 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:23.375693 | orchestrator | 2025-06-22 19:56:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 
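The "rabbitmq : Enable all stable feature flags" step shown in the recap above corresponds to the standard rabbitmqctl operation; a hedged sketch of how such a task can be expressed (run inside the container on a single cluster member — the exact kolla-ansible task differs):

# Illustrative approximation of the feature-flag step from the recap above.
- name: Enable all stable feature flags
  ansible.builtin.command: docker exec rabbitmq rabbitmqctl enable_feature_flag all
  run_once: true                    # only needs to run on one node of the cluster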
19:56:26.414349 | orchestrator | 2025-06-22 19:56:26 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:26.414879 | orchestrator | 2025-06-22 19:56:26 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:26.415600 | orchestrator | 2025-06-22 19:56:26 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:26.415716 | orchestrator | 2025-06-22 19:56:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:29.450741 | orchestrator | 2025-06-22 19:56:29 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:29.451999 | orchestrator | 2025-06-22 19:56:29 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:29.452948 | orchestrator | 2025-06-22 19:56:29 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:29.452983 | orchestrator | 2025-06-22 19:56:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:32.502373 | orchestrator | 2025-06-22 19:56:32 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:32.502593 | orchestrator | 2025-06-22 19:56:32 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:32.504619 | orchestrator | 2025-06-22 19:56:32 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:32.505245 | orchestrator | 2025-06-22 19:56:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:35.552441 | orchestrator | 2025-06-22 19:56:35 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:35.553917 | orchestrator | 2025-06-22 19:56:35 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:35.554010 | orchestrator | 2025-06-22 19:56:35 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:35.554078 | orchestrator | 2025-06-22 19:56:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:38.583976 | orchestrator | 2025-06-22 19:56:38 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:38.584942 | orchestrator | 2025-06-22 19:56:38 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:38.586540 | orchestrator | 2025-06-22 19:56:38 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:38.586588 | orchestrator | 2025-06-22 19:56:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:41.622435 | orchestrator | 2025-06-22 19:56:41 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:41.623489 | orchestrator | 2025-06-22 19:56:41 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:41.623601 | orchestrator | 2025-06-22 19:56:41 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:41.623623 | orchestrator | 2025-06-22 19:56:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:44.677780 | orchestrator | 2025-06-22 19:56:44 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:44.679673 | orchestrator | 2025-06-22 19:56:44 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:44.681404 | orchestrator | 2025-06-22 19:56:44 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:44.681633 | 
orchestrator | 2025-06-22 19:56:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:47.733299 | orchestrator | 2025-06-22 19:56:47 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:47.734806 | orchestrator | 2025-06-22 19:56:47 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:47.736605 | orchestrator | 2025-06-22 19:56:47 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:47.736879 | orchestrator | 2025-06-22 19:56:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:50.783546 | orchestrator | 2025-06-22 19:56:50 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:50.788634 | orchestrator | 2025-06-22 19:56:50 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:50.789219 | orchestrator | 2025-06-22 19:56:50 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:50.789251 | orchestrator | 2025-06-22 19:56:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:53.843729 | orchestrator | 2025-06-22 19:56:53 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:53.845372 | orchestrator | 2025-06-22 19:56:53 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:53.849340 | orchestrator | 2025-06-22 19:56:53 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:53.849378 | orchestrator | 2025-06-22 19:56:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:56.897339 | orchestrator | 2025-06-22 19:56:56 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:56.897550 | orchestrator | 2025-06-22 19:56:56 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:56.897840 | orchestrator | 2025-06-22 19:56:56 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:56.897865 | orchestrator | 2025-06-22 19:56:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:59.937873 | orchestrator | 2025-06-22 19:56:59 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:56:59.938535 | orchestrator | 2025-06-22 19:56:59 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:56:59.938999 | orchestrator | 2025-06-22 19:56:59 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:56:59.939034 | orchestrator | 2025-06-22 19:56:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:02.991873 | orchestrator | 2025-06-22 19:57:02 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:02.993544 | orchestrator | 2025-06-22 19:57:02 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:02.996890 | orchestrator | 2025-06-22 19:57:02 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:57:02.996922 | orchestrator | 2025-06-22 19:57:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:06.060903 | orchestrator | 2025-06-22 19:57:06 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:06.064702 | orchestrator | 2025-06-22 19:57:06 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:06.066380 | orchestrator | 2025-06-22 19:57:06 | INFO  | Task 
4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:57:06.066851 | orchestrator | 2025-06-22 19:57:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:09.117761 | orchestrator | 2025-06-22 19:57:09 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:09.117863 | orchestrator | 2025-06-22 19:57:09 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:09.118633 | orchestrator | 2025-06-22 19:57:09 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state STARTED 2025-06-22 19:57:09.118663 | orchestrator | 2025-06-22 19:57:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:12.149387 | orchestrator | 2025-06-22 19:57:12 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:12.149478 | orchestrator | 2025-06-22 19:57:12 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:12.151079 | orchestrator | 2025-06-22 19:57:12 | INFO  | Task 4439d1ce-c0e6-4a00-9ee7-70f3fcb19045 is in state SUCCESS 2025-06-22 19:57:12.153632 | orchestrator | 2025-06-22 19:57:12.153673 | orchestrator | 2025-06-22 19:57:12.153685 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:57:12.153698 | orchestrator | 2025-06-22 19:57:12.153712 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:57:12.153732 | orchestrator | Sunday 22 June 2025 19:54:48 +0000 (0:00:00.550) 0:00:00.550 *********** 2025-06-22 19:57:12.153751 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:57:12.153770 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:57:12.153788 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:57:12.153806 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.153824 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.153843 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.153858 | orchestrator | 2025-06-22 19:57:12.153869 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:57:12.153880 | orchestrator | Sunday 22 June 2025 19:54:49 +0000 (0:00:01.060) 0:00:01.610 *********** 2025-06-22 19:57:12.153891 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-22 19:57:12.153902 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-22 19:57:12.153913 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-22 19:57:12.153924 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-22 19:57:12.153934 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-22 19:57:12.153988 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-22 19:57:12.154000 | orchestrator | 2025-06-22 19:57:12.154010 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-22 19:57:12.154085 | orchestrator | 2025-06-22 19:57:12.154096 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-22 19:57:12.154107 | orchestrator | Sunday 22 June 2025 19:54:50 +0000 (0:00:00.989) 0:00:02.600 *********** 2025-06-22 19:57:12.154119 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:57:12.154131 | orchestrator | 2025-06-22 19:57:12.154142 | 
orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-22 19:57:12.154154 | orchestrator | Sunday 22 June 2025 19:54:51 +0000 (0:00:01.507) 0:00:04.107 *********** 2025-06-22 19:57:12.154167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154180 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154200 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154226 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154252 | orchestrator | 2025-06-22 19:57:12.154277 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-22 19:57:12.154290 | orchestrator | Sunday 22 June 2025 19:54:53 +0000 (0:00:01.719) 0:00:05.826 *********** 2025-06-22 19:57:12.154364 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154400 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154453 | orchestrator | 2025-06-22 19:57:12.154465 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-22 19:57:12.154476 | orchestrator | Sunday 22 June 2025 19:54:55 +0000 (0:00:01.675) 0:00:07.502 *********** 2025-06-22 19:57:12.154487 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154517 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154568 | orchestrator | 2025-06-22 19:57:12.154579 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-22 19:57:12.154591 | orchestrator | Sunday 22 June 2025 19:54:56 +0000 (0:00:01.503) 0:00:09.006 *********** 2025-06-22 19:57:12.154602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154613 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154629 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154680 | orchestrator | 2025-06-22 19:57:12.154696 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-22 19:57:12.154708 | orchestrator | Sunday 22 June 2025 19:54:58 +0000 (0:00:02.168) 0:00:11.174 *********** 2025-06-22 19:57:12.154719 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154730 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.154791 | orchestrator | 2025-06-22 19:57:12.154802 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-22 19:57:12.154814 | orchestrator | Sunday 22 June 2025 19:55:01 +0000 (0:00:02.458) 0:00:13.633 *********** 2025-06-22 19:57:12.154825 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:57:12.154836 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:57:12.154847 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:57:12.154858 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:57:12.154869 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.154880 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:57:12.154891 | orchestrator | 2025-06-22 19:57:12.154901 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-22 19:57:12.154918 | orchestrator | Sunday 22 June 2025 19:55:04 +0000 (0:00:02.907) 0:00:16.540 *********** 2025-06-22 19:57:12.154929 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-22 19:57:12.154940 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-22 19:57:12.154951 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-22 19:57:12.154961 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-22 19:57:12.154972 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-22 19:57:12.154983 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-22 19:57:12.154993 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:57:12.155004 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:57:12.155020 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:57:12.155032 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:57:12.155043 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:57:12.155053 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:57:12.155065 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:57:12.155076 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:57:12.155087 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:57:12.155099 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:57:12.155110 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:57:12.155121 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:57:12.155132 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:57:12.155143 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:57:12.155154 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:57:12.155165 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:57:12.155176 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:57:12.155187 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:57:12.155198 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:57:12.155209 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:57:12.155220 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:57:12.155231 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:57:12.155242 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:57:12.155260 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:57:12.155272 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:57:12.155287 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:57:12.155329 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:57:12.155341 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:57:12.155352 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:57:12.155363 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 
'value': False}) 2025-06-22 19:57:12.155373 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:57:12.155384 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:57:12.155395 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:57:12.155406 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:57:12.155417 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:57:12.155428 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:57:12.155438 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-06-22 19:57:12.155450 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-06-22 19:57:12.155466 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-06-22 19:57:12.155478 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-06-22 19:57:12.155488 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-06-22 19:57:12.155499 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-06-22 19:57:12.155510 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:57:12.155521 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:57:12.155532 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:57:12.155543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:57:12.155554 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:57:12.155565 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:57:12.155576 | orchestrator | 2025-06-22 19:57:12.155587 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:57:12.155598 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:19.707) 0:00:36.248 *********** 2025-06-22 19:57:12.155609 | orchestrator | 2025-06-22 19:57:12.155620 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:57:12.155638 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:00.069) 0:00:36.317 *********** 2025-06-22 19:57:12.155649 | 
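The two tasks above ("Create br-int bridge on OpenvSwitch" and "Configure OVN in OVSDB") amount to writing chassis-level settings into the local Open vSwitch database on every node. A rough, hand-rolled equivalent for testbed-node-0 would look like the sketch below; the values are taken from the item output above, while the kolla-ansible modules actually apply them through the OVSDB protocol rather than the CLI:

    # integration bridge that ovn-controller attaches its ports to
    ovs-vsctl --may-exist add-br br-int

    # chassis settings read by ovn-controller from the Open_vSwitch table
    ovs-vsctl set open_vswitch . \
        external_ids:ovn-encap-ip=192.168.16.10 \
        external_ids:ovn-encap-type=geneve \
        external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
        external_ids:ovn-remote-probe-interval=60000 \
        external_ids:ovn-openflow-probe-interval=60 \
        external_ids:ovn-monitor-all=false

    # only the control/network nodes (testbed-node-0..2) get the provider bridge
    # mapping and the gateway-chassis options; the compute nodes (testbed-node-3..5)
    # instead receive ovn-chassis-mac-mappings, as visible in the item output above
    ovs-vsctl set open_vswitch . \
        external_ids:ovn-bridge-mappings=physnet1:br-ex \
        external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"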
orchestrator | 2025-06-22 19:57:12.155660 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:57:12.155671 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:00.063) 0:00:36.381 *********** 2025-06-22 19:57:12.155682 | orchestrator | 2025-06-22 19:57:12.155693 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:57:12.155704 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:00.085) 0:00:36.467 *********** 2025-06-22 19:57:12.155714 | orchestrator | 2025-06-22 19:57:12.155725 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:57:12.155736 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:00.069) 0:00:36.537 *********** 2025-06-22 19:57:12.155747 | orchestrator | 2025-06-22 19:57:12.155758 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:57:12.155769 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:00.064) 0:00:36.602 *********** 2025-06-22 19:57:12.155780 | orchestrator | 2025-06-22 19:57:12.155790 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-22 19:57:12.155801 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:00.067) 0:00:36.669 *********** 2025-06-22 19:57:12.155812 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:57:12.155823 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.155834 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:57:12.155845 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.155856 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:57:12.155867 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.155878 | orchestrator | 2025-06-22 19:57:12.155893 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-22 19:57:12.155904 | orchestrator | Sunday 22 June 2025 19:55:26 +0000 (0:00:01.872) 0:00:38.541 *********** 2025-06-22 19:57:12.155915 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.155926 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:57:12.155937 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:57:12.155948 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:57:12.155958 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:57:12.155969 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:57:12.155980 | orchestrator | 2025-06-22 19:57:12.155991 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-22 19:57:12.156001 | orchestrator | 2025-06-22 19:57:12.156012 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:57:12.156023 | orchestrator | Sunday 22 June 2025 19:55:54 +0000 (0:00:28.747) 0:01:07.288 *********** 2025-06-22 19:57:12.156034 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:57:12.156045 | orchestrator | 2025-06-22 19:57:12.156056 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:57:12.156066 | orchestrator | Sunday 22 June 2025 19:55:55 +0000 (0:00:00.579) 0:01:07.868 *********** 2025-06-22 19:57:12.156077 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 
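After the "Restart ovn-controller container" handler, each node's ovn-controller process picks up the external_ids written above and, once the southbound database deployed in the following play is reachable, registers itself there as a chassis. Assuming ovn-sbctl is installed somewhere that can reach the SB endpoints (for example on one of the control nodes), the registration can be spot-checked by hand:

    # each testbed node should appear as a chassis with a geneve encap and its 192.168.16.x IP
    ovn-sbctl --db=tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642 show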
19:57:12.156088 | orchestrator | 2025-06-22 19:57:12.156099 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-22 19:57:12.156110 | orchestrator | Sunday 22 June 2025 19:55:56 +0000 (0:00:00.670) 0:01:08.538 *********** 2025-06-22 19:57:12.156121 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.156132 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.156142 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.156153 | orchestrator | 2025-06-22 19:57:12.156164 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-22 19:57:12.156175 | orchestrator | Sunday 22 June 2025 19:55:56 +0000 (0:00:00.768) 0:01:09.307 *********** 2025-06-22 19:57:12.156186 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.156202 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.156213 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.156229 | orchestrator | 2025-06-22 19:57:12.156240 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-22 19:57:12.156251 | orchestrator | Sunday 22 June 2025 19:55:57 +0000 (0:00:00.324) 0:01:09.631 *********** 2025-06-22 19:57:12.156262 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.156273 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.156284 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.156295 | orchestrator | 2025-06-22 19:57:12.156319 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-22 19:57:12.156330 | orchestrator | Sunday 22 June 2025 19:55:57 +0000 (0:00:00.320) 0:01:09.951 *********** 2025-06-22 19:57:12.156341 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.156351 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.156362 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.156373 | orchestrator | 2025-06-22 19:57:12.156384 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-22 19:57:12.156394 | orchestrator | Sunday 22 June 2025 19:55:58 +0000 (0:00:00.532) 0:01:10.483 *********** 2025-06-22 19:57:12.156405 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.156416 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.156426 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.156437 | orchestrator | 2025-06-22 19:57:12.156448 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-22 19:57:12.156459 | orchestrator | Sunday 22 June 2025 19:55:58 +0000 (0:00:00.360) 0:01:10.844 *********** 2025-06-22 19:57:12.156470 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.156481 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.156491 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.156502 | orchestrator | 2025-06-22 19:57:12.156513 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-22 19:57:12.156524 | orchestrator | Sunday 22 June 2025 19:55:58 +0000 (0:00:00.283) 0:01:11.128 *********** 2025-06-22 19:57:12.156535 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.156545 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.156556 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.156567 | orchestrator | 2025-06-22 19:57:12.156578 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] 
************* 2025-06-22 19:57:12.156589 | orchestrator | Sunday 22 June 2025 19:55:59 +0000 (0:00:00.295) 0:01:11.423 *********** 2025-06-22 19:57:12.156600 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.156611 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.156622 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.156632 | orchestrator | 2025-06-22 19:57:12.156643 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-22 19:57:12.156654 | orchestrator | Sunday 22 June 2025 19:55:59 +0000 (0:00:00.482) 0:01:11.906 *********** 2025-06-22 19:57:12.156665 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.156676 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.156687 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.156697 | orchestrator | 2025-06-22 19:57:12.156708 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-22 19:57:12.156719 | orchestrator | Sunday 22 June 2025 19:55:59 +0000 (0:00:00.280) 0:01:12.186 *********** 2025-06-22 19:57:12.156730 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.156741 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.156752 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.156763 | orchestrator | 2025-06-22 19:57:12.156774 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-22 19:57:12.156785 | orchestrator | Sunday 22 June 2025 19:56:00 +0000 (0:00:00.292) 0:01:12.479 *********** 2025-06-22 19:57:12.156796 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.156806 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.156817 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.156839 | orchestrator | 2025-06-22 19:57:12.156851 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-22 19:57:12.156866 | orchestrator | Sunday 22 June 2025 19:56:00 +0000 (0:00:00.281) 0:01:12.760 *********** 2025-06-22 19:57:12.156877 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.156888 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.156899 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.156910 | orchestrator | 2025-06-22 19:57:12.156921 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-22 19:57:12.156932 | orchestrator | Sunday 22 June 2025 19:56:00 +0000 (0:00:00.450) 0:01:13.211 *********** 2025-06-22 19:57:12.156943 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.156953 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.156964 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.156975 | orchestrator | 2025-06-22 19:57:12.156986 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-22 19:57:12.156997 | orchestrator | Sunday 22 June 2025 19:56:01 +0000 (0:00:00.305) 0:01:13.517 *********** 2025-06-22 19:57:12.157008 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157018 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157029 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157040 | orchestrator | 2025-06-22 19:57:12.157051 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-22 19:57:12.157062 | orchestrator | 
Sunday 22 June 2025 19:56:01 +0000 (0:00:00.315) 0:01:13.833 *********** 2025-06-22 19:57:12.157072 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157083 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157094 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157105 | orchestrator | 2025-06-22 19:57:12.157115 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-22 19:57:12.157126 | orchestrator | Sunday 22 June 2025 19:56:01 +0000 (0:00:00.278) 0:01:14.112 *********** 2025-06-22 19:57:12.157137 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157148 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157158 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157169 | orchestrator | 2025-06-22 19:57:12.157180 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-22 19:57:12.157191 | orchestrator | Sunday 22 June 2025 19:56:02 +0000 (0:00:00.454) 0:01:14.566 *********** 2025-06-22 19:57:12.157202 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157213 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157229 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157241 | orchestrator | 2025-06-22 19:57:12.157251 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:57:12.157263 | orchestrator | Sunday 22 June 2025 19:56:02 +0000 (0:00:00.289) 0:01:14.856 *********** 2025-06-22 19:57:12.157273 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:57:12.157284 | orchestrator | 2025-06-22 19:57:12.157295 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-22 19:57:12.157320 | orchestrator | Sunday 22 June 2025 19:56:03 +0000 (0:00:00.555) 0:01:15.411 *********** 2025-06-22 19:57:12.157332 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.157342 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.157358 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.157378 | orchestrator | 2025-06-22 19:57:12.157395 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-22 19:57:12.157406 | orchestrator | Sunday 22 June 2025 19:56:03 +0000 (0:00:00.852) 0:01:16.263 *********** 2025-06-22 19:57:12.157417 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.157428 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.157439 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.157449 | orchestrator | 2025-06-22 19:57:12.157460 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-22 19:57:12.157478 | orchestrator | Sunday 22 June 2025 19:56:04 +0000 (0:00:00.505) 0:01:16.769 *********** 2025-06-22 19:57:12.157489 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157500 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157511 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157521 | orchestrator | 2025-06-22 19:57:12.157532 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-22 19:57:12.157543 | orchestrator | Sunday 22 June 2025 19:56:04 +0000 (0:00:00.422) 0:01:17.191 *********** 2025-06-22 19:57:12.157554 | orchestrator | skipping: [testbed-node-0] 2025-06-22 
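The lookup_cluster.yml and bootstrap-initial.yml steps above decide whether this is a fresh deployment: with no existing ovn_nb_db/ovn_sb_db volumes, no answering database ports and no Raft leader, all the checks are skipped and the role falls through to bootstrapping a new three-node cluster on testbed-node-0..2. Once the database containers are running, the same information the role gathers can be inspected manually, for example like this (container names as in the log; the control-socket paths are the ovn-ctl defaults and may differ inside the images):

    # Raft membership, term and leader for both databases
    docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    docker exec ovn_sb_db ovs-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound

    # the port-liveness checks are essentially TCP probes of the DB client ports
    nc -z 192.168.16.10 6641 && echo "NB answering"
    nc -z 192.168.16.10 6642 && echo "SB answering"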
19:57:12.157564 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157575 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157586 | orchestrator | 2025-06-22 19:57:12.157597 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-22 19:57:12.157607 | orchestrator | Sunday 22 June 2025 19:56:05 +0000 (0:00:00.393) 0:01:17.585 *********** 2025-06-22 19:57:12.157618 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157629 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157639 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157650 | orchestrator | 2025-06-22 19:57:12.157661 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-22 19:57:12.157672 | orchestrator | Sunday 22 June 2025 19:56:05 +0000 (0:00:00.395) 0:01:17.981 *********** 2025-06-22 19:57:12.157684 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157702 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157719 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157732 | orchestrator | 2025-06-22 19:57:12.157743 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-22 19:57:12.157754 | orchestrator | Sunday 22 June 2025 19:56:06 +0000 (0:00:00.593) 0:01:18.574 *********** 2025-06-22 19:57:12.157765 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157776 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157786 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157797 | orchestrator | 2025-06-22 19:57:12.157808 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-22 19:57:12.157819 | orchestrator | Sunday 22 June 2025 19:56:06 +0000 (0:00:00.330) 0:01:18.905 *********** 2025-06-22 19:57:12.157829 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.157840 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.157851 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.157861 | orchestrator | 2025-06-22 19:57:12.157872 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-22 19:57:12.157889 | orchestrator | Sunday 22 June 2025 19:56:06 +0000 (0:00:00.311) 0:01:19.217 *********** 2025-06-22 19:57:12.157908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.157921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.157933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.157951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.157970 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.157982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.157994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158005 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158045 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158059 | orchestrator | 2025-06-22 19:57:12.158071 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-22 19:57:12.158082 | orchestrator | Sunday 22 June 2025 19:56:08 +0000 (0:00:01.396) 0:01:20.613 *********** 2025-06-22 19:57:12.158098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158213 | orchestrator | 2025-06-22 19:57:12.158224 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-22 19:57:12.158234 | orchestrator | Sunday 22 June 2025 19:56:12 +0000 (0:00:03.908) 
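Each "Copying over config.json files for services" task writes one config.json per service into /etc/kolla/<service>/ on the target host; the matching volume in the container definition ('/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro') makes it visible to the kolla image, which reads it at start-up to learn which command to exec and which files to copy into place. The exact content is generated from the kolla-ansible templates; only the general shape is sketched here, with the command and file lists elided:

    # illustrative shape only, not the literal generated file
    cat /etc/kolla/ovn-nb-db/config.json
    # {
    #   "command": "...",
    #   "config_files": [{"source": "...", "dest": "...", "owner": "...", "perm": "..."}],
    #   "permissions": [{"path": "/var/log/kolla/...", "owner": "..."}]
    # }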
0:01:24.521 *********** 2025-06-22 19:57:12.158246 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 
'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.158386 | orchestrator | 2025-06-22 19:57:12.158397 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:57:12.158408 | orchestrator | Sunday 22 June 2025 19:56:14 +0000 (0:00:02.109) 0:01:26.631 *********** 2025-06-22 19:57:12.158419 | orchestrator | 2025-06-22 19:57:12.158430 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:57:12.158441 | orchestrator | Sunday 22 June 2025 19:56:14 +0000 (0:00:00.070) 0:01:26.702 *********** 2025-06-22 19:57:12.158451 | orchestrator | 2025-06-22 19:57:12.158462 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:57:12.158473 | orchestrator | Sunday 22 June 2025 19:56:14 +0000 (0:00:00.065) 0:01:26.767 *********** 2025-06-22 19:57:12.158484 | orchestrator | 2025-06-22 19:57:12.158494 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-22 19:57:12.158505 | orchestrator | Sunday 22 June 2025 19:56:14 +0000 (0:00:00.070) 0:01:26.838 *********** 2025-06-22 19:57:12.158516 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:57:12.158527 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.158538 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:57:12.158549 | orchestrator | 2025-06-22 19:57:12.158560 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-22 19:57:12.158570 | orchestrator | Sunday 22 June 2025 19:56:21 +0000 (0:00:07.509) 0:01:34.348 *********** 2025-06-22 19:57:12.158581 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.158592 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:57:12.158603 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:57:12.158619 | orchestrator | 2025-06-22 19:57:12.158630 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-22 19:57:12.158703 | orchestrator | Sunday 22 June 2025 19:56:24 +0000 (0:00:02.856) 0:01:37.204 *********** 2025-06-22 19:57:12.158723 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.158734 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:57:12.158750 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:57:12.158761 | orchestrator | 2025-06-22 19:57:12.158772 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-22 19:57:12.158783 | orchestrator | Sunday 22 June 2025 19:56:32 +0000 (0:00:07.861) 0:01:45.066 *********** 2025-06-22 19:57:12.158794 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.158804 | orchestrator | 2025-06-22 19:57:12.158815 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-22 19:57:12.158826 | orchestrator | Sunday 22 June 2025 19:56:32 +0000 (0:00:00.125) 0:01:45.192 *********** 2025-06-22 19:57:12.158837 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.158848 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.158859 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.158869 | orchestrator | 2025-06-22 19:57:12.158880 
| orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-22 19:57:12.158891 | orchestrator | Sunday 22 June 2025 19:56:33 +0000 (0:00:00.782) 0:01:45.974 *********** 2025-06-22 19:57:12.158901 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.158912 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.158923 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.158934 | orchestrator | 2025-06-22 19:57:12.158945 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-22 19:57:12.158956 | orchestrator | Sunday 22 June 2025 19:56:34 +0000 (0:00:00.832) 0:01:46.807 *********** 2025-06-22 19:57:12.158966 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.158977 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.158988 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.158999 | orchestrator | 2025-06-22 19:57:12.159009 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-22 19:57:12.159020 | orchestrator | Sunday 22 June 2025 19:56:35 +0000 (0:00:00.774) 0:01:47.582 *********** 2025-06-22 19:57:12.159031 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.159041 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.159052 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.159063 | orchestrator | 2025-06-22 19:57:12.159074 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-22 19:57:12.159085 | orchestrator | Sunday 22 June 2025 19:56:35 +0000 (0:00:00.741) 0:01:48.323 *********** 2025-06-22 19:57:12.159096 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.159106 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.159124 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.159136 | orchestrator | 2025-06-22 19:57:12.159147 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-22 19:57:12.159158 | orchestrator | Sunday 22 June 2025 19:56:36 +0000 (0:00:00.831) 0:01:49.155 *********** 2025-06-22 19:57:12.159168 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.159179 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.159190 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.159200 | orchestrator | 2025-06-22 19:57:12.159211 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-22 19:57:12.159222 | orchestrator | Sunday 22 June 2025 19:56:38 +0000 (0:00:01.311) 0:01:50.466 *********** 2025-06-22 19:57:12.159233 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.159243 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.159254 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.159264 | orchestrator | 2025-06-22 19:57:12.159275 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-22 19:57:12.159286 | orchestrator | Sunday 22 June 2025 19:56:38 +0000 (0:00:00.343) 0:01:50.810 *********** 2025-06-22 19:57:12.159321 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159334 | 
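Note that "Configure OVN NB/SB connection settings" reports changed only on testbed-node-0 and is skipped on the other two nodes: the task is run against the current Raft leader found by the preceding "Get ... cluster leader" tasks and updates the Connection table of each database, which Raft then replicates to the followers. A hand-rolled equivalent would look roughly like this (run where each database's local unix control socket is reachable, e.g. inside the respective DB container; 6641/6642 are the usual NB/SB client ports and the probe value is illustrative):

    ovn-nbctl --inactivity-probe=60000 set-connection ptcp:6641:0.0.0.0
    ovn-sbctl --inactivity-probe=60000 set-connection ptcp:6642:0.0.0.0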
orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159345 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159356 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159368 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159384 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159396 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159407 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159430 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159441 | orchestrator | 2025-06-22 19:57:12.159452 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-22 
19:57:12.159464 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:01.373) 0:01:52.183 *********** 2025-06-22 19:57:12.159475 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159492 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159504 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159515 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159553 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159576 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159587 | orchestrator | 2025-06-22 19:57:12.159598 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-22 19:57:12.159609 | orchestrator | Sunday 22 June 2025 19:56:43 +0000 (0:00:04.104) 0:01:56.288 *********** 2025-06-22 19:57:12.159627 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159644 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159655 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159685 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159750 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:57:12.159761 | orchestrator | 2025-06-22 19:57:12.159772 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:57:12.159783 | orchestrator | Sunday 22 June 2025 19:56:47 +0000 (0:00:03.206) 0:01:59.495 *********** 2025-06-22 19:57:12.159794 | orchestrator | 2025-06-22 19:57:12.159805 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:57:12.159816 | orchestrator | Sunday 22 June 2025 19:56:47 +0000 (0:00:00.082) 0:01:59.577 *********** 2025-06-22 19:57:12.159827 | orchestrator | 2025-06-22 19:57:12.159844 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:57:12.159855 | orchestrator | Sunday 22 June 2025 19:56:47 +0000 (0:00:00.070) 0:01:59.647 *********** 2025-06-22 19:57:12.159866 | orchestrator | 2025-06-22 19:57:12.159877 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-22 19:57:12.159888 | orchestrator | Sunday 22 June 2025 19:56:47 +0000 (0:00:00.065) 0:01:59.713 *********** 2025-06-22 19:57:12.159899 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:57:12.159910 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:57:12.159921 | orchestrator | 2025-06-22 19:57:12.159937 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-22 19:57:12.159948 | orchestrator | Sunday 22 June 2025 19:56:53 +0000 (0:00:06.127) 0:02:05.840 *********** 2025-06-22 19:57:12.159959 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:57:12.159975 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:57:12.159991 | orchestrator | 2025-06-22 19:57:12.160003 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-22 19:57:12.160013 | orchestrator | Sunday 22 June 2025 19:56:59 +0000 (0:00:06.434) 0:02:12.274 *********** 2025-06-22 19:57:12.160024 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:57:12.160035 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:57:12.160046 | orchestrator | 2025-06-22 19:57:12.160057 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-22 19:57:12.160068 | orchestrator | Sunday 22 June 2025 19:57:06 +0000 (0:00:06.233) 0:02:18.508 *********** 2025-06-22 19:57:12.160078 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:57:12.160089 | orchestrator | 2025-06-22 19:57:12.160100 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster 
leader] ****************************** 2025-06-22 19:57:12.160111 | orchestrator | Sunday 22 June 2025 19:57:06 +0000 (0:00:00.149) 0:02:18.657 *********** 2025-06-22 19:57:12.160121 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.160132 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.160143 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.160154 | orchestrator | 2025-06-22 19:57:12.160165 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-22 19:57:12.160175 | orchestrator | Sunday 22 June 2025 19:57:07 +0000 (0:00:01.065) 0:02:19.722 *********** 2025-06-22 19:57:12.160186 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.160197 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.160207 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.160218 | orchestrator | 2025-06-22 19:57:12.160229 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-22 19:57:12.160240 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:00.784) 0:02:20.507 *********** 2025-06-22 19:57:12.160250 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.160261 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.160272 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.160282 | orchestrator | 2025-06-22 19:57:12.160293 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-22 19:57:12.160322 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:00.805) 0:02:21.313 *********** 2025-06-22 19:57:12.160333 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:57:12.160344 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:57:12.160355 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:57:12.160365 | orchestrator | 2025-06-22 19:57:12.160376 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-22 19:57:12.160387 | orchestrator | Sunday 22 June 2025 19:57:09 +0000 (0:00:00.702) 0:02:22.015 *********** 2025-06-22 19:57:12.160398 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.160408 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.160419 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.160430 | orchestrator | 2025-06-22 19:57:12.160441 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-22 19:57:12.160452 | orchestrator | Sunday 22 June 2025 19:57:10 +0000 (0:00:00.959) 0:02:22.974 *********** 2025-06-22 19:57:12.160469 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:57:12.160480 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:57:12.160490 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:57:12.160501 | orchestrator | 2025-06-22 19:57:12.160512 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:57:12.160523 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-22 19:57:12.160539 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-22 19:57:12.160551 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-22 19:57:12.160562 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:57:12.160573 | orchestrator | testbed-node-4 
: ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:57:12.160584 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:57:12.160595 | orchestrator | 2025-06-22 19:57:12.160606 | orchestrator | 2025-06-22 19:57:12.160617 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:57:12.160627 | orchestrator | Sunday 22 June 2025 19:57:11 +0000 (0:00:00.877) 0:02:23.852 *********** 2025-06-22 19:57:12.160638 | orchestrator | =============================================================================== 2025-06-22 19:57:12.160649 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 28.75s 2025-06-22 19:57:12.160660 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.71s 2025-06-22 19:57:12.160670 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.10s 2025-06-22 19:57:12.160681 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.64s 2025-06-22 19:57:12.160692 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.29s 2025-06-22 19:57:12.160703 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.10s 2025-06-22 19:57:12.160713 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.91s 2025-06-22 19:57:12.160730 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.21s 2025-06-22 19:57:12.160741 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.91s 2025-06-22 19:57:12.160752 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.46s 2025-06-22 19:57:12.160762 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.17s 2025-06-22 19:57:12.160773 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.11s 2025-06-22 19:57:12.160784 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.87s 2025-06-22 19:57:12.160794 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.72s 2025-06-22 19:57:12.160805 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.68s 2025-06-22 19:57:12.160816 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.51s 2025-06-22 19:57:12.160827 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.50s 2025-06-22 19:57:12.160838 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.40s 2025-06-22 19:57:12.160848 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.37s 2025-06-22 19:57:12.160859 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.31s 2025-06-22 19:57:12.160870 | orchestrator | 2025-06-22 19:57:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:15.188741 | orchestrator | 2025-06-22 19:57:15 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:15.189683 | orchestrator | 2025-06-22 19:57:15 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:15.189715 | 
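The OVN play above queries the Raft cluster status for both OVN_Northbound and OVN_Southbound and applies the connection settings only on the node that currently holds leadership, which is why testbed-node-0 reports changed while the other two nodes are skipped; the two wait tasks then confirm that the NB/SB listeners answer on every node. A minimal sketch of that leader-gated pattern, assuming the databases run in an ovn_nb_db container with the usual control socket path (neither detail is taken from this log):

# Sketch only: inspect cluster status, change the connection on the leader.
# Container name, socket path and target port are assumptions.
- name: Get OVN_Northbound cluster leader
  command: >
    docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl
    cluster/status OVN_Northbound
  register: nb_cluster_status
  changed_when: false

- name: Configure OVN NB connection settings
  command: docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641
  when: "'Role: leader' in nb_cluster_status.stdout"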
orchestrator | 2025-06-22 19:57:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:18.243116 | orchestrator | 2025-06-22 19:57:18 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:18.244621 | orchestrator | 2025-06-22 19:57:18 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:18.244653 | orchestrator | 2025-06-22 19:57:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:21.285519 | orchestrator | 2025-06-22 19:57:21 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:21.285780 | orchestrator | 2025-06-22 19:57:21 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:21.285998 | orchestrator | 2025-06-22 19:57:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:24.326263 | orchestrator | 2025-06-22 19:57:24 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:24.329217 | orchestrator | 2025-06-22 19:57:24 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:24.329347 | orchestrator | 2025-06-22 19:57:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:27.378690 | orchestrator | 2025-06-22 19:57:27 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:27.380481 | orchestrator | 2025-06-22 19:57:27 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:27.380787 | orchestrator | 2025-06-22 19:57:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:30.427166 | orchestrator | 2025-06-22 19:57:30 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:30.428721 | orchestrator | 2025-06-22 19:57:30 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:30.428754 | orchestrator | 2025-06-22 19:57:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:33.470914 | orchestrator | 2025-06-22 19:57:33 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:33.472975 | orchestrator | 2025-06-22 19:57:33 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:33.473009 | orchestrator | 2025-06-22 19:57:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:36.521620 | orchestrator | 2025-06-22 19:57:36 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:36.523457 | orchestrator | 2025-06-22 19:57:36 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:36.523509 | orchestrator | 2025-06-22 19:57:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:39.567411 | orchestrator | 2025-06-22 19:57:39 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:39.569477 | orchestrator | 2025-06-22 19:57:39 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:39.569853 | orchestrator | 2025-06-22 19:57:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:42.611049 | orchestrator | 2025-06-22 19:57:42 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:42.611161 | orchestrator | 2025-06-22 19:57:42 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:42.611222 | orchestrator | 2025-06-22 19:57:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 
19:57:45.651155 | orchestrator | 2025-06-22 19:57:45 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:45.651471 | orchestrator | 2025-06-22 19:57:45 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:45.651495 | orchestrator | 2025-06-22 19:57:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:48.683149 | orchestrator | 2025-06-22 19:57:48 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:48.683241 | orchestrator | 2025-06-22 19:57:48 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:48.683254 | orchestrator | 2025-06-22 19:57:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:51.722689 | orchestrator | 2025-06-22 19:57:51 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:51.723212 | orchestrator | 2025-06-22 19:57:51 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:51.723240 | orchestrator | 2025-06-22 19:57:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:54.772559 | orchestrator | 2025-06-22 19:57:54 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:54.773110 | orchestrator | 2025-06-22 19:57:54 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:54.774132 | orchestrator | 2025-06-22 19:57:54 | INFO  | Task 2215dca0-6dc0-4129-993d-123759bf91fc is in state STARTED 2025-06-22 19:57:54.774164 | orchestrator | 2025-06-22 19:57:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:57.813646 | orchestrator | 2025-06-22 19:57:57 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:57:57.813847 | orchestrator | 2025-06-22 19:57:57 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:57:57.814789 | orchestrator | 2025-06-22 19:57:57 | INFO  | Task 2215dca0-6dc0-4129-993d-123759bf91fc is in state STARTED 2025-06-22 19:57:57.814814 | orchestrator | 2025-06-22 19:57:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:00.854359 | orchestrator | 2025-06-22 19:58:00 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:00.854695 | orchestrator | 2025-06-22 19:58:00 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:00.855225 | orchestrator | 2025-06-22 19:58:00 | INFO  | Task 2215dca0-6dc0-4129-993d-123759bf91fc is in state STARTED 2025-06-22 19:58:00.855366 | orchestrator | 2025-06-22 19:58:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:03.887342 | orchestrator | 2025-06-22 19:58:03 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:03.887757 | orchestrator | 2025-06-22 19:58:03 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:03.888615 | orchestrator | 2025-06-22 19:58:03 | INFO  | Task 2215dca0-6dc0-4129-993d-123759bf91fc is in state STARTED 2025-06-22 19:58:03.888634 | orchestrator | 2025-06-22 19:58:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:06.933356 | orchestrator | 2025-06-22 19:58:06 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:06.933450 | orchestrator | 2025-06-22 19:58:06 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:06.934055 | orchestrator | 2025-06-22 
19:58:06 | INFO  | Task 2215dca0-6dc0-4129-993d-123759bf91fc is in state STARTED 2025-06-22 19:58:06.934108 | orchestrator | 2025-06-22 19:58:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:09.960438 | orchestrator | 2025-06-22 19:58:09 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:09.960997 | orchestrator | 2025-06-22 19:58:09 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:09.961519 | orchestrator | 2025-06-22 19:58:09 | INFO  | Task 2215dca0-6dc0-4129-993d-123759bf91fc is in state SUCCESS 2025-06-22 19:58:09.961712 | orchestrator | 2025-06-22 19:58:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:13.008978 | orchestrator | 2025-06-22 19:58:13 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:13.012354 | orchestrator | 2025-06-22 19:58:13 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:13.012399 | orchestrator | 2025-06-22 19:58:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:16.062252 | orchestrator | 2025-06-22 19:58:16 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:16.062368 | orchestrator | 2025-06-22 19:58:16 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:16.062377 | orchestrator | 2025-06-22 19:58:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:19.107779 | orchestrator | 2025-06-22 19:58:19 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:19.109642 | orchestrator | 2025-06-22 19:58:19 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:19.109909 | orchestrator | 2025-06-22 19:58:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:22.158834 | orchestrator | 2025-06-22 19:58:22 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:22.160965 | orchestrator | 2025-06-22 19:58:22 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:22.161006 | orchestrator | 2025-06-22 19:58:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:25.199434 | orchestrator | 2025-06-22 19:58:25 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:25.201970 | orchestrator | 2025-06-22 19:58:25 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:25.203894 | orchestrator | 2025-06-22 19:58:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:28.251109 | orchestrator | 2025-06-22 19:58:28 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:28.255533 | orchestrator | 2025-06-22 19:58:28 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:28.255561 | orchestrator | 2025-06-22 19:58:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:31.299400 | orchestrator | 2025-06-22 19:58:31 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:31.300616 | orchestrator | 2025-06-22 19:58:31 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:31.300747 | orchestrator | 2025-06-22 19:58:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:34.350610 | orchestrator | 2025-06-22 19:58:34 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 
19:58:34.351078 | orchestrator | 2025-06-22 19:58:34 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:34.351115 | orchestrator | 2025-06-22 19:58:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:37.402668 | orchestrator | 2025-06-22 19:58:37 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:37.404239 | orchestrator | 2025-06-22 19:58:37 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:37.404296 | orchestrator | 2025-06-22 19:58:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:40.440283 | orchestrator | 2025-06-22 19:58:40 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:40.441087 | orchestrator | 2025-06-22 19:58:40 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:40.441215 | orchestrator | 2025-06-22 19:58:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:43.477725 | orchestrator | 2025-06-22 19:58:43 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:43.479531 | orchestrator | 2025-06-22 19:58:43 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:43.479565 | orchestrator | 2025-06-22 19:58:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:46.527643 | orchestrator | 2025-06-22 19:58:46 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:46.529778 | orchestrator | 2025-06-22 19:58:46 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:46.529810 | orchestrator | 2025-06-22 19:58:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:49.576029 | orchestrator | 2025-06-22 19:58:49 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:49.576123 | orchestrator | 2025-06-22 19:58:49 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:49.576137 | orchestrator | 2025-06-22 19:58:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:52.624385 | orchestrator | 2025-06-22 19:58:52 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:52.626174 | orchestrator | 2025-06-22 19:58:52 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:52.626212 | orchestrator | 2025-06-22 19:58:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:55.669933 | orchestrator | 2025-06-22 19:58:55 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:55.671534 | orchestrator | 2025-06-22 19:58:55 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:55.671688 | orchestrator | 2025-06-22 19:58:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:58.723510 | orchestrator | 2025-06-22 19:58:58 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:58:58.725696 | orchestrator | 2025-06-22 19:58:58 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:58:58.725962 | orchestrator | 2025-06-22 19:58:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:01.776353 | orchestrator | 2025-06-22 19:59:01 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:01.778422 | orchestrator | 2025-06-22 19:59:01 | INFO  | Task 
9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:01.778466 | orchestrator | 2025-06-22 19:59:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:04.826114 | orchestrator | 2025-06-22 19:59:04 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:04.826731 | orchestrator | 2025-06-22 19:59:04 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:04.826888 | orchestrator | 2025-06-22 19:59:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:07.873264 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:07.873840 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:07.873871 | orchestrator | 2025-06-22 19:59:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:10.916254 | orchestrator | 2025-06-22 19:59:10 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:10.917320 | orchestrator | 2025-06-22 19:59:10 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:10.917366 | orchestrator | 2025-06-22 19:59:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:13.959356 | orchestrator | 2025-06-22 19:59:13 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:13.960364 | orchestrator | 2025-06-22 19:59:13 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:13.960393 | orchestrator | 2025-06-22 19:59:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:16.996716 | orchestrator | 2025-06-22 19:59:16 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:16.997487 | orchestrator | 2025-06-22 19:59:16 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:16.997539 | orchestrator | 2025-06-22 19:59:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:20.042587 | orchestrator | 2025-06-22 19:59:20 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:20.045544 | orchestrator | 2025-06-22 19:59:20 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:20.045741 | orchestrator | 2025-06-22 19:59:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:23.088014 | orchestrator | 2025-06-22 19:59:23 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:23.089512 | orchestrator | 2025-06-22 19:59:23 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:23.090284 | orchestrator | 2025-06-22 19:59:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:26.138188 | orchestrator | 2025-06-22 19:59:26 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:26.139963 | orchestrator | 2025-06-22 19:59:26 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:26.140303 | orchestrator | 2025-06-22 19:59:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:29.180790 | orchestrator | 2025-06-22 19:59:29 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:29.183045 | orchestrator | 2025-06-22 19:59:29 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:29.183416 | orchestrator 
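The run of INFO lines here is the deployment driver polling its task queue: it re-reads the state of each outstanding task, waits one second as the message says, and repeats until every task reports SUCCESS (task 2215dca0 already finished above, 9749b3dc follows a little further down, and d1f02d22 is still running at the end of this excerpt). The same wait-until-done pattern, expressed as an Ansible retry loop with a purely hypothetical helper standing in for the real state lookup:

# Illustrative only: check-task-state is a hypothetical helper command,
# not an actual osism subcommand.
- name: Wait for a deployment task to finish
  command: check-task-state d1f02d22-93a7-4e15-b82f-d4ee991a3d5d
  register: task_state
  until: "'SUCCESS' in task_state.stdout"
  retries: 600
  delay: 1
  changed_when: false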
| 2025-06-22 19:59:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:32.230143 | orchestrator | 2025-06-22 19:59:32 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:32.232429 | orchestrator | 2025-06-22 19:59:32 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:32.232524 | orchestrator | 2025-06-22 19:59:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:35.273346 | orchestrator | 2025-06-22 19:59:35 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:35.273637 | orchestrator | 2025-06-22 19:59:35 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:35.274156 | orchestrator | 2025-06-22 19:59:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:38.328919 | orchestrator | 2025-06-22 19:59:38 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:38.330260 | orchestrator | 2025-06-22 19:59:38 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:38.331164 | orchestrator | 2025-06-22 19:59:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:41.378363 | orchestrator | 2025-06-22 19:59:41 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:41.379944 | orchestrator | 2025-06-22 19:59:41 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:41.380324 | orchestrator | 2025-06-22 19:59:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:44.435644 | orchestrator | 2025-06-22 19:59:44 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:44.438565 | orchestrator | 2025-06-22 19:59:44 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:44.438815 | orchestrator | 2025-06-22 19:59:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:47.480567 | orchestrator | 2025-06-22 19:59:47 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:47.482320 | orchestrator | 2025-06-22 19:59:47 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state STARTED 2025-06-22 19:59:47.482371 | orchestrator | 2025-06-22 19:59:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:50.533004 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 19:59:50.540663 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task 9749b3dc-691a-4c73-bf8c-d702d5e97d43 is in state SUCCESS 2025-06-22 19:59:50.543070 | orchestrator | 2025-06-22 19:59:50.543146 | orchestrator | None 2025-06-22 19:59:50.543162 | orchestrator | 2025-06-22 19:59:50.543174 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:59:50.543263 | orchestrator | 2025-06-22 19:59:50.543389 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:59:50.543403 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.337) 0:00:00.337 *********** 2025-06-22 19:59:50.543474 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.543488 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.543499 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.543510 | orchestrator | 2025-06-22 19:59:50.543522 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2025-06-22 19:59:50.543533 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.428) 0:00:00.765 *********** 2025-06-22 19:59:50.543545 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-22 19:59:50.543580 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-22 19:59:50.543593 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-22 19:59:50.543603 | orchestrator | 2025-06-22 19:59:50.543614 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-22 19:59:50.543652 | orchestrator | 2025-06-22 19:59:50.543663 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-22 19:59:50.543673 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:00.419) 0:00:01.184 *********** 2025-06-22 19:59:50.543684 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.543748 | orchestrator | 2025-06-22 19:59:50.543782 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-06-22 19:59:50.543794 | orchestrator | Sunday 22 June 2025 19:53:41 +0000 (0:00:00.906) 0:00:02.091 *********** 2025-06-22 19:59:50.543804 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.543815 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.543826 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.543837 | orchestrator | 2025-06-22 19:59:50.543848 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-22 19:59:50.543858 | orchestrator | Sunday 22 June 2025 19:53:42 +0000 (0:00:01.427) 0:00:03.518 *********** 2025-06-22 19:59:50.543869 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.543880 | orchestrator | 2025-06-22 19:59:50.543891 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-06-22 19:59:50.543902 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:00.751) 0:00:04.270 *********** 2025-06-22 19:59:50.543912 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.543923 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.543964 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.543975 | orchestrator | 2025-06-22 19:59:50.543986 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-06-22 19:59:50.543997 | orchestrator | Sunday 22 June 2025 19:53:44 +0000 (0:00:01.286) 0:00:05.556 *********** 2025-06-22 19:59:50.544008 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:50.544019 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:50.544031 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:50.544042 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:50.544052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:50.544063 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:50.544074 | orchestrator | ok: [testbed-node-1] => 
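Once the OVN tasks are done, the next play regroups the hosts before the loadbalancer role runs: one group_by per Kolla action and one per enabled service, which is where the enable_loadbalancer_True item above comes from. A minimal sketch of that grouping pattern (the variable names mirror the log output, everything else is illustrative):

# Dynamic grouping that yields groups such as enable_loadbalancer_True.
- name: Group hosts based on Kolla action
  group_by:
    key: "kolla_action_{{ kolla_action }}"

- name: Group hosts based on enabled services
  group_by:
    key: "enable_loadbalancer_{{ enable_loadbalancer | bool }}"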
(item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:59:50.544086 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:59:50.544097 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:59:50.544108 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:59:50.544118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:59:50.544129 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:59:50.544139 | orchestrator | 2025-06-22 19:59:50.544150 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 19:59:50.544161 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:03.143) 0:00:08.700 *********** 2025-06-22 19:59:50.544172 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-22 19:59:50.544183 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-22 19:59:50.544221 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-22 19:59:50.544233 | orchestrator | 2025-06-22 19:59:50.544244 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 19:59:50.544287 | orchestrator | Sunday 22 June 2025 19:53:48 +0000 (0:00:01.126) 0:00:09.827 *********** 2025-06-22 19:59:50.544298 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-22 19:59:50.544309 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-22 19:59:50.544320 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-22 19:59:50.544331 | orchestrator | 2025-06-22 19:59:50.544351 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 19:59:50.544611 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:01.725) 0:00:11.552 *********** 2025-06-22 19:59:50.544629 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-22 19:59:50.544641 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.544667 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-22 19:59:50.544678 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.544689 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-22 19:59:50.544700 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.544711 | orchestrator | 2025-06-22 19:59:50.544722 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-22 19:59:50.544732 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:00.897) 0:00:12.449 *********** 2025-06-22 19:59:50.544747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.544763 | 
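The host preparation above enables non-local binds for IPv4 and IPv6 (so HAProxy and keepalived can bind the virtual IP on nodes that do not currently own it), raises net.unix.max_dgram_qlen to 128, leaves net.ipv4.tcp_retries2 unmanaged (KOLLA_UNSET), and loads and persists the ip_vs module that keepalived relies on. A compact sketch of those steps, with the values taken from the log and the module/collection names being the usual ones rather than read from the role:

# Sysctl and kernel-module preparation for the loadbalancer hosts (sketch).
- name: Setting sysctl values
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
  loop:
    - { name: net.ipv6.ip_nonlocal_bind, value: "1" }
    - { name: net.ipv4.ip_nonlocal_bind, value: "1" }
    - { name: net.unix.max_dgram_qlen, value: "128" }

- name: Load ip_vs
  community.general.modprobe:
    name: ip_vs
    state: present

- name: Persist modules via modules-load.d
  copy:
    content: "ip_vs\n"
    dest: /etc/modules-load.d/ip_vs.conf
    mode: "0644"
  become: true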
orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.544775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.544786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.544799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.544832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.544846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.544858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.544869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.544880 | orchestrator | 2025-06-22 19:59:50.544891 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-22 19:59:50.544902 | orchestrator | Sunday 22 June 2025 19:53:53 +0000 (0:00:01.911) 0:00:14.361 *********** 2025-06-22 19:59:50.544913 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.544924 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.544934 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.544945 | orchestrator | 2025-06-22 19:59:50.544956 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-22 19:59:50.544967 | orchestrator | Sunday 22 June 2025 19:53:54 +0000 (0:00:01.181) 0:00:15.543 *********** 2025-06-22 19:59:50.544978 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-22 19:59:50.545017 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-22 19:59:50.545030 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-22 19:59:50.545073 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-22 19:59:50.545085 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-22 19:59:50.545096 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-06-22 19:59:50.545106 | orchestrator | 2025-06-22 19:59:50.545117 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-22 19:59:50.545233 | orchestrator | Sunday 22 June 2025 19:53:56 +0000 (0:00:02.329) 0:00:17.872 *********** 2025-06-22 19:59:50.545275 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.545286 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.545297 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.545307 | orchestrator | 2025-06-22 19:59:50.545383 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-22 19:59:50.545404 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:01.497) 0:00:19.370 *********** 2025-06-22 19:59:50.545415 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.545426 | 
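The per-item output above is driven by a map describing the three loadbalancer containers; the directory and sub-directory tasks simply loop over it and only act on enabled entries. A trimmed reconstruction of that map (fields copied from the log output, anything left out here is merely elided), followed by the shape of the directory loop:

# Vars: trimmed service map reconstructed from the log output above.
loadbalancer_services:
  haproxy:
    container_name: haproxy
    group: loadbalancer
    enabled: true
    privileged: true
    image: registry.osism.tech/kolla/haproxy:2024.2
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"]
  proxysql:
    container_name: proxysql
    group: loadbalancer
    enabled: true
    image: registry.osism.tech/kolla/proxysql:2024.2
    healthcheck:
      test: ["CMD-SHELL", "healthcheck_listen proxysql 6032"]
  keepalived:
    container_name: keepalived
    group: loadbalancer
    enabled: true
    privileged: true
    image: registry.osism.tech/kolla/keepalived:2024.2
---
# Tasks: sketch of the loop that produces one "changed" per enabled service.
- name: Ensuring config directories exist
  file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    mode: "0770"
  become: true
  loop: "{{ loadbalancer_services | dict2items }}"
  when: item.value.enabled | bool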
orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.545437 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.545447 | orchestrator | 2025-06-22 19:59:50.545458 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-22 19:59:50.545469 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:01.652) 0:00:21.023 *********** 2025-06-22 19:59:50.545481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.545508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.545520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.545533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:50.545586 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.545598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.545610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.545630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.545646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:50.545658 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.545677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.545689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.545701 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.545712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:50.545767 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.545780 | orchestrator | 2025-06-22 19:59:50.545791 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-22 19:59:50.545827 | orchestrator | Sunday 22 June 2025 19:54:01 +0000 (0:00:01.361) 0:00:22.384 *********** 2025-06-22 19:59:50.545840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.545851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.545926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.545941 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.545953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.545965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:50.545983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.545994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.546010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:50.546083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.546128 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.546141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e', '__omit_place_holder__311f1b455624d7c300bd92ae6c37e9481c426e1e'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:50.546169 | orchestrator | 2025-06-22 19:59:50.546181 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-22 19:59:50.546474 | orchestrator | Sunday 22 June 2025 19:54:05 +0000 (0:00:04.246) 0:00:26.631 *********** 2025-06-22 19:59:50.546561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.546575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
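The two check-handling tasks above show the gating at work: "Removing checks for services which are disabled" skips every item because nothing relevant is disabled, while "Copying checks for services which are enabled" installs check scripts only for haproxy and proxysql; keepalived is skipped because its entry defines no healthcheck, and haproxy-ssh because it is not enabled. That skip pattern is consistent with a condition along these lines (template name and destination are assumptions):

# Sketch of the gating that yields changed for haproxy/proxysql only.
- name: Copying checks for services which are enabled
  template:
    src: "{{ item.key }}-check.sh.j2"          # assumed template name
    dest: "/etc/kolla/{{ item.key }}/check.sh" # assumed destination
    mode: "0770"
  loop: "{{ loadbalancer_services | dict2items }}"
  when:
    - item.value.enabled | bool
    - item.value.healthcheck is defined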
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.546588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.546610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.546723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.546749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.546771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.546782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.546794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.546806 | orchestrator | 2025-06-22 19:59:50.546817 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-22 19:59:50.546828 | orchestrator | Sunday 22 June 2025 19:54:09 +0000 (0:00:03.535) 0:00:30.167 *********** 2025-06-22 19:59:50.546840 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:59:50.546851 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:59:50.546862 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:59:50.546873 | orchestrator | 2025-06-22 19:59:50.546884 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-22 19:59:50.546895 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:02.335) 0:00:32.502 *********** 2025-06-22 19:59:50.546911 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:59:50.546922 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:59:50.548394 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:59:50.548468 | orchestrator | 2025-06-22 19:59:50.548478 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-22 19:59:50.548486 | orchestrator | Sunday 22 June 2025 19:54:15 +0000 (0:00:04.221) 0:00:36.724 *********** 2025-06-22 19:59:50.548493 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.548500 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.548507 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.548514 | orchestrator | 2025-06-22 19:59:50.548521 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-22 19:59:50.548528 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:00.670) 0:00:37.394 *********** 2025-06-22 19:59:50.548535 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 
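The loop items printed above are the kolla-ansible loadbalancer service definitions for this testbed: haproxy, proxysql and keepalived are enabled, haproxy-ssh is not, and each enabled service carries the image, bind mounts and healthcheck that the later container tasks reuse. The following is a minimal sketch, not the kolla-ansible source, that mirrors those definitions (values copied from the log output, volume lists omitted) and prints which services would actually be handled; the name `services` and the loop are illustrative only.

# Illustrative sketch: mirrors the service definitions shown in the loop
# items above (volumes omitted); this is not kolla-ansible code.
services = {
    "haproxy": {
        "enabled": True,
        "image": "registry.osism.tech/kolla/haproxy:2024.2",
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:61313"],
                        "timeout": "30"},
    },
    "proxysql": {
        "enabled": True,
        "image": "registry.osism.tech/kolla/proxysql:2024.2",
        "healthcheck": {"interval": "30", "retries": "3", "start_period": "5",
                        "test": ["CMD-SHELL", "healthcheck_listen proxysql 6032"],
                        "timeout": "30"},
    },
    "keepalived": {
        "enabled": True,
        "image": "registry.osism.tech/kolla/keepalived:2024.2",
        # no healthcheck entry appears for keepalived in the log output
    },
    "haproxy-ssh": {
        "enabled": False,  # disabled, hence the 'skipping' results for it above
        "image": "registry.osism.tech/kolla/haproxy-ssh:2024.2",
    },
}

# Only enabled services get config files copied and containers started.
for name, svc in services.items():
    status = "deploy" if svc["enabled"] else "skip"
    test = svc.get("healthcheck", {}).get("test", ["-", "none"])[-1]
    print(f"{name:12s} {status:6s} {svc['image']}  healthcheck: {test}")
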
19:59:50.548543 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 19:59:50.548566 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 19:59:50.548573 | orchestrator | 2025-06-22 19:59:50.548580 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-22 19:59:50.548586 | orchestrator | Sunday 22 June 2025 19:54:20 +0000 (0:00:03.905) 0:00:41.300 *********** 2025-06-22 19:59:50.548594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:59:50.548601 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:59:50.548607 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:59:50.548614 | orchestrator | 2025-06-22 19:59:50.548621 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-22 19:59:50.548628 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:01.644) 0:00:42.944 *********** 2025-06-22 19:59:50.548635 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-22 19:59:50.548642 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-22 19:59:50.548649 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-22 19:59:50.548655 | orchestrator | 2025-06-22 19:59:50.548662 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-22 19:59:50.548669 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:01.580) 0:00:44.525 *********** 2025-06-22 19:59:50.548675 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-22 19:59:50.548682 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-22 19:59:50.548689 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-22 19:59:50.548696 | orchestrator | 2025-06-22 19:59:50.548702 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-22 19:59:50.548709 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:01.452) 0:00:45.977 *********** 2025-06-22 19:59:50.548716 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.548723 | orchestrator | 2025-06-22 19:59:50.548730 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-22 19:59:50.548736 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:00.802) 0:00:46.780 *********** 2025-06-22 19:59:50.548745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.548755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.548779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.548791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.548799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.548806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.548814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.548821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.548828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.548839 | orchestrator | 2025-06-22 19:59:50.548846 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-22 19:59:50.548856 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:02.876) 0:00:49.657 *********** 2025-06-22 19:59:50.548868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.548876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.548883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.548890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.548897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.548904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.548911 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.548922 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.548932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.548944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.548951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 
'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.548958 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.548966 | orchestrator | 2025-06-22 19:59:50.548974 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-22 19:59:50.548981 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:00.672) 0:00:50.329 *********** 2025-06-22 19:59:50.548989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.548997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549005 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549016 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.549024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 
'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549057 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.549064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549088 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.549095 | orchestrator | 2025-06-22 19:59:50.549103 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-22 19:59:50.549115 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:01.045) 0:00:51.375 *********** 2025-06-22 19:59:50.549123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549154 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.549161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549182 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.549243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549284 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.549291 | orchestrator | 2025-06-22 19:59:50.549297 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-22 19:59:50.549304 | orchestrator | Sunday 22 June 2025 19:54:31 +0000 (0:00:00.735) 0:00:52.110 *********** 2025-06-22 19:59:50.549311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549362 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.549369 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.549381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549389 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549403 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.549410 | orchestrator | 2025-06-22 19:59:50.549416 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-22 19:59:50.549423 | orchestrator | Sunday 22 June 2025 19:54:31 +0000 (0:00:00.705) 0:00:52.816 *********** 2025-06-22 19:59:50.549430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549456 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.549477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549499 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.549506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549532 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.549539 | orchestrator | 2025-06-22 19:59:50.549545 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-22 19:59:50.549552 | orchestrator | Sunday 22 June 2025 19:54:32 +0000 (0:00:00.950) 0:00:53.766 *********** 2025-06-22 19:59:50.549562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549589 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.549596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549629 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.549642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549657 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.549664 | orchestrator | 2025-06-22 19:59:50.549671 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-22 19:59:50.549678 | orchestrator | Sunday 22 June 2025 19:54:33 +0000 (0:00:00.592) 0:00:54.359 *********** 2025-06-22 19:59:50.549685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549711 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.549718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549747 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.549754 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549779 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.549786 | orchestrator | 2025-06-22 19:59:50.549793 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-22 19:59:50.549800 | orchestrator | Sunday 22 June 2025 19:54:34 +0000 (0:00:00.678) 0:00:55.037 *********** 2025-06-22 19:59:50.549807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549814 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 
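In the service-cert-copy tasks above, "Copying over extra CA certificates" reports changed while every "backend internal TLS certificate" and "backend internal TLS key" item is skipped. That is the usual kolla-ansible pattern: the same service dictionary is looped over, and the per-item copy only fires when backend TLS is enabled for the deployment. A rough sketch of that per-item decision follows; the flag name kolla_enable_tls_backend is the standard kolla-ansible variable, but assuming it is the exact condition evaluated in this run is an inference, not something the log states.

# Rough sketch of the per-item skip pattern above; the variable name is an
# assumption, and this is not the kolla-ansible service-cert-copy role.
kolla_enable_tls_backend = False  # assumed setting for this testbed run

def backend_tls_copy_results(services, tls_backend_enabled):
    results = {}
    for name, svc in services.items():
        # Each loop item is skipped unless the service is enabled *and*
        # backend TLS is switched on, matching the 'skipping: ... => (item=...)' lines.
        if svc.get("enabled") and tls_backend_enabled:
            results[name] = "changed"
        else:
            results[name] = "skipping"
    return results

services = {"haproxy": {"enabled": True},
            "proxysql": {"enabled": True},
            "keepalived": {"enabled": True}}
print(backend_tls_copy_results(services, kolla_enable_tls_backend))
# every item comes back as 'skipping', matching the TLS certificate/key tasks above
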
19:59:50.549825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549832 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.549844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549870 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.549877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:50.549884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:50.549892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:50.549899 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.549905 | orchestrator | 2025-06-22 19:59:50.549912 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-22 19:59:50.549919 | orchestrator | Sunday 22 June 2025 19:54:36 +0000 (0:00:02.704) 0:00:57.742 *********** 2025-06-22 19:59:50.549931 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:59:50.549938 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:59:50.549949 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:59:50.549960 | orchestrator | 2025-06-22 19:59:50.549967 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-22 19:59:50.549974 | orchestrator | Sunday 22 June 2025 19:54:38 +0000 (0:00:01.435) 0:00:59.178 *********** 2025-06-22 19:59:50.549981 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:59:50.549988 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:59:50.549994 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:59:50.550001 | orchestrator | 2025-06-22 19:59:50.550008 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-22 19:59:50.553848 | orchestrator | Sunday 22 June 2025 19:54:39 +0000 (0:00:01.526) 0:01:00.705 *********** 2025-06-22 19:59:50.553893 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:59:50.553902 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:59:50.553909 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:59:50.553915 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:59:50.553922 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.553929 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:59:50.553936 | orchestrator | skipping: [testbed-node-1] 2025-06-22 
19:59:50.553943 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:59:50.553949 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.553956 | orchestrator | 2025-06-22 19:59:50.553963 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-22 19:59:50.553970 | orchestrator | Sunday 22 June 2025 19:54:41 +0000 (0:00:01.532) 0:01:02.238 *********** 2025-06-22 19:59:50.553978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.553987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.553994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:50.554058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.554068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.554075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:50.554083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.554090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.554097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:50.554104 | orchestrator | 2025-06-22 19:59:50.554111 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-22 19:59:50.554118 | orchestrator | Sunday 22 June 2025 19:54:44 +0000 (0:00:03.245) 0:01:05.483 *********** 2025-06-22 19:59:50.554125 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.554131 | orchestrator | 2025-06-22 19:59:50.554142 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-22 19:59:50.554149 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.690) 0:01:06.174 *********** 2025-06-22 19:59:50.554166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:59:50.554174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.554182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:59:50.554221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.554231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:59:50.554254 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.554268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554285 | orchestrator | 2025-06-22 19:59:50.554292 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-22 19:59:50.554299 | orchestrator | Sunday 22 June 2025 19:54:50 +0000 (0:00:04.750) 0:01:10.924 *********** 2025-06-22 19:59:50.554307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:59:50.554319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.554326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554340 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.554383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:59:50.554396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.554408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554424 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.554437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:59:50.554444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.554451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554469 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.554476 | orchestrator | 2025-06-22 19:59:50.554484 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-22 19:59:50.554492 | orchestrator | Sunday 22 June 2025 19:54:50 +0000 (0:00:00.951) 0:01:11.876 *********** 2025-06-22 19:59:50.554500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:50.554510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:50.554518 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.554526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:50.554534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 
19:59:50.554541 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.554548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:50.554559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:50.554567 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.554574 | orchestrator | 2025-06-22 19:59:50.554586 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-22 19:59:50.554594 | orchestrator | Sunday 22 June 2025 19:54:52 +0000 (0:00:01.250) 0:01:13.127 *********** 2025-06-22 19:59:50.554601 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.554609 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.554617 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.554624 | orchestrator | 2025-06-22 19:59:50.554632 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-22 19:59:50.554639 | orchestrator | Sunday 22 June 2025 19:54:53 +0000 (0:00:01.711) 0:01:14.838 *********** 2025-06-22 19:59:50.554645 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.554652 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.554659 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.554665 | orchestrator | 2025-06-22 19:59:50.554672 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-22 19:59:50.554679 | orchestrator | Sunday 22 June 2025 19:54:56 +0000 (0:00:02.810) 0:01:17.649 *********** 2025-06-22 19:59:50.554685 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.554692 | orchestrator | 2025-06-22 19:59:50.554698 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-22 19:59:50.554705 | orchestrator | Sunday 22 June 2025 19:54:57 +0000 (0:00:01.064) 0:01:18.713 *********** 2025-06-22 19:59:50.554713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.554723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.554753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.554779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554793 | orchestrator | 2025-06-22 19:59:50.554800 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-22 19:59:50.554807 | orchestrator | Sunday 22 June 2025 19:55:02 +0000 (0:00:04.992) 0:01:23.706 *********** 2025-06-22 19:59:50.554821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.554828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554847 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.554854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.554861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554875 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.554888 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.554900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.554915 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.554921 | orchestrator | 2025-06-22 19:59:50.554928 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-22 19:59:50.554935 | orchestrator | Sunday 22 June 2025 19:55:03 +0000 (0:00:00.744) 0:01:24.450 *********** 2025-06-22 19:59:50.554942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:50.554950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:50.554957 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.554964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:50.554971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:50.554977 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.554984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:50.554991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:50.554998 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.555004 | orchestrator | 2025-06-22 19:59:50.555011 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-22 19:59:50.555018 | orchestrator | Sunday 22 June 2025 19:55:04 +0000 (0:00:01.041) 0:01:25.492 *********** 2025-06-22 19:59:50.555025 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.555031 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.555038 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.555044 | orchestrator | 2025-06-22 19:59:50.555054 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-22 19:59:50.555061 | orchestrator | Sunday 22 June 2025 19:55:06 +0000 (0:00:01.750) 0:01:27.242 *********** 2025-06-22 19:59:50.555067 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.555078 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.555088 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.555099 | orchestrator | 2025-06-22 19:59:50.555115 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-22 19:59:50.555127 | orchestrator | Sunday 22 June 2025 19:55:08 +0000 (0:00:01.935) 0:01:29.177 *********** 2025-06-22 19:59:50.555138 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.555150 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.555161 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.555172 | orchestrator | 2025-06-22 19:59:50.555182 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-22 19:59:50.555208 | orchestrator | Sunday 22 June 2025 19:55:08 +0000 (0:00:00.495) 0:01:29.672 *********** 2025-06-22 19:59:50.555218 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.555229 | orchestrator | 2025-06-22 19:59:50.555239 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-22 19:59:50.555250 | orchestrator | Sunday 22 June 2025 19:55:09 +0000 (0:00:00.586) 0:01:30.259 *********** 2025-06-22 19:59:50.555262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': 
['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:59:50.555274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:59:50.555286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:59:50.555298 | orchestrator | 2025-06-22 19:59:50.555309 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-22 19:59:50.555317 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:03.112) 0:01:33.371 *********** 2025-06-22 19:59:50.555334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:59:50.555348 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.555355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server 
testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:59:50.555362 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.555370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:59:50.555377 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.555383 | orchestrator | 2025-06-22 19:59:50.555390 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-22 19:59:50.555397 | orchestrator | Sunday 22 June 2025 19:55:14 +0000 (0:00:01.758) 0:01:35.130 *********** 2025-06-22 19:59:50.555404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:50.555413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:50.555421 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.555428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:50.555440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 
5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:50.555447 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.555460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:50.555467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:50.555474 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.555481 | orchestrator | 2025-06-22 19:59:50.555488 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-22 19:59:50.555495 | orchestrator | Sunday 22 June 2025 19:55:16 +0000 (0:00:01.863) 0:01:36.994 *********** 2025-06-22 19:59:50.555501 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.555508 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.555515 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.555521 | orchestrator | 2025-06-22 19:59:50.555528 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-22 19:59:50.555535 | orchestrator | Sunday 22 June 2025 19:55:16 +0000 (0:00:00.372) 0:01:37.366 *********** 2025-06-22 19:59:50.555541 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.555548 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.555555 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.555561 | orchestrator | 2025-06-22 19:59:50.555568 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-22 19:59:50.555575 | orchestrator | Sunday 22 June 2025 19:55:17 +0000 (0:00:01.121) 0:01:38.488 *********** 2025-06-22 19:59:50.555581 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.555588 | orchestrator | 2025-06-22 19:59:50.555595 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-22 19:59:50.555601 | orchestrator | Sunday 22 June 2025 19:55:18 +0000 (0:00:00.795) 0:01:39.283 *********** 2025-06-22 19:59:50.555608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.555620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.555657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.555670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555724 | orchestrator | 2025-06-22 19:59:50.555731 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-22 19:59:50.555738 | orchestrator | Sunday 22 June 2025 19:55:21 +0000 (0:00:03.086) 0:01:42.369 *********** 2025-06-22 19:59:50.555745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.555755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555783 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.555790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.555803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555833 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.555840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.555848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.555875 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.555882 | orchestrator | 2025-06-22 19:59:50.555889 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-22 19:59:50.555897 | orchestrator | Sunday 22 June 2025 19:55:22 +0000 (0:00:00.851) 0:01:43.220 *********** 2025-06-22 19:59:50.555907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:50.556026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:50.556038 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.556046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:50.556053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:50.556061 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.556068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:50.556076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:50.556083 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.556096 | orchestrator | 2025-06-22 19:59:50.556103 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-22 19:59:50.556111 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:00.797) 0:01:44.018 *********** 2025-06-22 19:59:50.556118 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.556125 | orchestrator | changed: 
[testbed-node-1] 2025-06-22 19:59:50.556132 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.556139 | orchestrator | 2025-06-22 19:59:50.556147 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-22 19:59:50.556154 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:01.247) 0:01:45.265 *********** 2025-06-22 19:59:50.556161 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.556168 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.556175 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.556183 | orchestrator | 2025-06-22 19:59:50.556236 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-22 19:59:50.556244 | orchestrator | Sunday 22 June 2025 19:55:26 +0000 (0:00:01.984) 0:01:47.250 *********** 2025-06-22 19:59:50.556251 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.556258 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.556266 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.556273 | orchestrator | 2025-06-22 19:59:50.556280 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-22 19:59:50.556288 | orchestrator | Sunday 22 June 2025 19:55:26 +0000 (0:00:00.293) 0:01:47.544 *********** 2025-06-22 19:59:50.556295 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.556302 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.556309 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.556316 | orchestrator | 2025-06-22 19:59:50.556323 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-22 19:59:50.556331 | orchestrator | Sunday 22 June 2025 19:55:27 +0000 (0:00:00.561) 0:01:48.106 *********** 2025-06-22 19:59:50.556338 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.556345 | orchestrator | 2025-06-22 19:59:50.556352 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-22 19:59:50.556360 | orchestrator | Sunday 22 June 2025 19:55:28 +0000 (0:00:01.027) 0:01:49.134 *********** 2025-06-22 19:59:50.556367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:59:50.556387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:50.556395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:59:50.556453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:50.556466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556497 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:59:50.556524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:50.556532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556575 | orchestrator | 2025-06-22 19:59:50.556582 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-22 19:59:50.556593 | orchestrator | Sunday 22 June 2025 19:55:32 +0000 (0:00:04.635) 0:01:53.769 *********** 2025-06-22 19:59:50.556605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:59:50.556613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:50.556620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556668 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.556676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:59:50.556684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:50.556692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:59:50.556726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:50.556742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556758 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.556766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.556815 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.556823 | orchestrator | 2025-06-22 19:59:50.556830 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-22 19:59:50.556838 | orchestrator | Sunday 22 June 2025 19:55:33 +0000 (0:00:00.820) 0:01:54.589 *********** 2025-06-22 19:59:50.556846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:50.556853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:50.556862 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.556870 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:50.556877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:50.556884 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.556892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:50.556899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:50.556907 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.556914 | orchestrator | 2025-06-22 19:59:50.556925 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-22 19:59:50.556933 | orchestrator | Sunday 22 June 2025 19:55:34 +0000 (0:00:01.024) 0:01:55.614 *********** 2025-06-22 19:59:50.556940 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.556948 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.556955 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.556963 | orchestrator | 2025-06-22 19:59:50.556970 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-22 19:59:50.556978 | orchestrator | Sunday 22 June 2025 19:55:36 +0000 (0:00:01.642) 0:01:57.256 *********** 2025-06-22 19:59:50.556985 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.556993 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.557000 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.557008 | orchestrator | 2025-06-22 19:59:50.557015 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-22 19:59:50.557021 | orchestrator | Sunday 22 June 2025 19:55:38 +0000 (0:00:01.973) 0:01:59.230 *********** 2025-06-22 19:59:50.557028 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.557034 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.557041 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.557048 | orchestrator | 2025-06-22 19:59:50.557054 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-22 19:59:50.557061 | orchestrator | Sunday 22 June 2025 19:55:38 +0000 (0:00:00.278) 0:01:59.508 *********** 2025-06-22 19:59:50.557068 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.557074 | orchestrator | 2025-06-22 19:59:50.557081 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-22 19:59:50.557087 | orchestrator | Sunday 22 June 2025 19:55:39 +0000 (0:00:00.735) 0:02:00.244 *********** 2025-06-22 19:59:50.557116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:59:50.557126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.557145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:59:50.557154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.557173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:59:50.557181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 
'yes'}}}})  2025-06-22 19:59:50.557207 | orchestrator | 2025-06-22 19:59:50.557215 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-22 19:59:50.557222 | orchestrator | Sunday 22 June 2025 19:55:43 +0000 (0:00:03.981) 0:02:04.226 *********** 2025-06-22 19:59:50.557236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 19:59:50.557245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.557256 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.557263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 19:59:50.557279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.557290 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.557297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 19:59:50.557312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.557320 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.557327 | orchestrator | 2025-06-22 19:59:50.557334 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-22 19:59:50.557341 | orchestrator | Sunday 22 June 2025 19:55:46 +0000 (0:00:02.813) 0:02:07.039 *********** 2025-06-22 19:59:50.557352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:50.557359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:50.557366 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.557373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:50.557380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:50.557387 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.557397 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:50.557408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:50.557416 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.557422 | orchestrator | 2025-06-22 19:59:50.557429 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-22 19:59:50.557436 | orchestrator | Sunday 22 June 2025 19:55:49 +0000 (0:00:03.039) 0:02:10.079 *********** 2025-06-22 19:59:50.557443 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.557449 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.557456 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.557462 | orchestrator | 2025-06-22 19:59:50.557469 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-22 19:59:50.557476 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:01.689) 0:02:11.768 *********** 2025-06-22 19:59:50.557486 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.557493 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.557499 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.557506 | orchestrator | 2025-06-22 19:59:50.557513 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-22 19:59:50.557519 | orchestrator | Sunday 22 June 2025 19:55:52 +0000 (0:00:01.989) 0:02:13.758 *********** 2025-06-22 19:59:50.557526 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.557532 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.557539 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.557546 | orchestrator | 2025-06-22 19:59:50.557552 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-22 19:59:50.557559 | orchestrator | Sunday 22 June 2025 19:55:53 +0000 (0:00:00.317) 0:02:14.075 *********** 2025-06-22 19:59:50.557566 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.557572 | orchestrator | 2025-06-22 19:59:50.557579 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-22 19:59:50.557586 | orchestrator | Sunday 22 June 2025 19:55:54 +0000 (0:00:00.836) 0:02:14.911 *********** 2025-06-22 19:59:50.557593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:59:50.557600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:59:50.557607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:59:50.557615 | orchestrator | 2025-06-22 19:59:50.557621 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-22 19:59:50.557628 | orchestrator | Sunday 22 June 2025 19:55:57 +0000 (0:00:03.504) 0:02:18.416 *********** 2025-06-22 19:59:50.557642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:59:50.557655 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.557662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:59:50.557669 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.557676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:59:50.557683 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.557690 | orchestrator | 2025-06-22 19:59:50.557697 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-22 19:59:50.557704 | orchestrator | Sunday 22 June 2025 19:55:57 +0000 (0:00:00.390) 0:02:18.807 *********** 2025-06-22 19:59:50.557710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:50.557717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:50.557724 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.557731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:50.557738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:50.557744 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.557751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:50.557758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:50.557764 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.557771 | orchestrator | 2025-06-22 19:59:50.557778 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-22 19:59:50.557784 | orchestrator | Sunday 22 June 2025 19:55:58 +0000 (0:00:00.782) 0:02:19.589 *********** 2025-06-22 19:59:50.557791 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.557798 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.557810 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.557816 | orchestrator | 2025-06-22 19:59:50.557823 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-22 19:59:50.557830 | orchestrator | 
Sunday 22 June 2025 19:56:00 +0000 (0:00:01.540) 0:02:21.129 *********** 2025-06-22 19:59:50.557836 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.557843 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.557849 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.557856 | orchestrator | 2025-06-22 19:59:50.557865 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-22 19:59:50.557872 | orchestrator | Sunday 22 June 2025 19:56:02 +0000 (0:00:02.023) 0:02:23.153 *********** 2025-06-22 19:59:50.557879 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.557885 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.557974 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.557984 | orchestrator | 2025-06-22 19:59:50.557990 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-22 19:59:50.557997 | orchestrator | Sunday 22 June 2025 19:56:02 +0000 (0:00:00.303) 0:02:23.457 *********** 2025-06-22 19:59:50.558004 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.558011 | orchestrator | 2025-06-22 19:59:50.558048 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-22 19:59:50.558055 | orchestrator | Sunday 22 June 2025 19:56:03 +0000 (0:00:00.872) 0:02:24.329 *********** 2025-06-22 19:59:50.558063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:59:50.558081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:59:50.558094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:59:50.558102 | orchestrator | 2025-06-22 19:59:50.558112 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-22 19:59:50.558119 | orchestrator | Sunday 22 June 2025 19:56:07 +0000 (0:00:03.698) 0:02:28.027 *********** 2025-06-22 19:59:50.558134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:59:50.558142 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.558150 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:59:50.558161 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.558176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:59:50.558184 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.558206 | orchestrator | 2025-06-22 19:59:50.558213 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-22 19:59:50.558219 | orchestrator | Sunday 22 June 2025 19:56:07 +0000 (0:00:00.636) 0:02:28.663 *********** 2025-06-22 19:59:50.558227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:50.558234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:50.558241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:50.558252 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:50.558259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:59:50.558266 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.558273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:50.558280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:50.558293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:50.558301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:50.558307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:59:50.558314 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.558321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:50.558328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:50.558335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:50.558342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:50.558349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:59:50.558359 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.558366 | orchestrator | 2025-06-22 19:59:50.558373 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-22 19:59:50.558380 | orchestrator | Sunday 22 June 2025 19:56:08 +0000 (0:00:01.007) 0:02:29.671 *********** 2025-06-22 19:59:50.558386 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.558393 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.558399 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.558406 | orchestrator | 2025-06-22 19:59:50.558413 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] 
************ 2025-06-22 19:59:50.558419 | orchestrator | Sunday 22 June 2025 19:56:10 +0000 (0:00:01.736) 0:02:31.408 *********** 2025-06-22 19:59:50.558426 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.558432 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.558439 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.558446 | orchestrator | 2025-06-22 19:59:50.558452 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-22 19:59:50.558459 | orchestrator | Sunday 22 June 2025 19:56:12 +0000 (0:00:02.298) 0:02:33.706 *********** 2025-06-22 19:59:50.558466 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.558472 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.558479 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.558486 | orchestrator | 2025-06-22 19:59:50.558492 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-22 19:59:50.558499 | orchestrator | Sunday 22 June 2025 19:56:13 +0000 (0:00:00.417) 0:02:34.124 *********** 2025-06-22 19:59:50.558505 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.558512 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.558519 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.558525 | orchestrator | 2025-06-22 19:59:50.558532 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-22 19:59:50.558538 | orchestrator | Sunday 22 June 2025 19:56:13 +0000 (0:00:00.330) 0:02:34.454 *********** 2025-06-22 19:59:50.558545 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.558552 | orchestrator | 2025-06-22 19:59:50.558558 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-22 19:59:50.558565 | orchestrator | Sunday 22 June 2025 19:56:14 +0000 (0:00:01.138) 0:02:35.593 *********** 2025-06-22 19:59:50.558579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:59:50.558587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:50.558599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:59:50.558607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:50.558615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:59:50.558628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
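[editor's note] For orientation while reading the keystone loop output above: the haproxy-config role receives, per service, a 'haproxy' sub-dict like the ones logged here (internal and external variants, port/listen_port, optional backend extras) and only acts on entries whose 'enabled' flag is set. The Python sketch below is an illustration of that shape only, not kolla-ansible's actual template logic; the VIP, node IPs and the render() helper are assumptions taken from values visible elsewhere in this log.

    # Illustrative only: a rough approximation of what the haproxy-config role
    # does with the 'haproxy' sub-dict logged above for keystone. The real
    # rendering lives in kolla-ansible's haproxy-config templates; the names
    # and the render() helper below are hypothetical.

    keystone_haproxy = {
        "keystone_internal": {"enabled": True, "mode": "http", "external": False,
                              "port": "5000", "listen_port": "5000",
                              "backend_http_extra": ["balance roundrobin"]},
        "keystone_external": {"enabled": True, "mode": "http", "external": True,
                              "external_fqdn": "api.testbed.osism.xyz",
                              "port": "5000", "listen_port": "5000",
                              "backend_http_extra": ["balance roundrobin"]},
    }

    # VIP and member addresses as they appear elsewhere in this log
    # (internal VIP 192.168.16.9, nodes .10/.11/.12); assumed here for illustration.
    internal_vip = "192.168.16.9"
    members = {"testbed-node-0": "192.168.16.10",
               "testbed-node-1": "192.168.16.11",
               "testbed-node-2": "192.168.16.12"}

    def render(name, svc):
        """Emit a simplified frontend/backend pair for one enabled service entry."""
        if not svc.get("enabled"):
            return ""
        bind = svc.get("external_fqdn", internal_vip)
        lines = [f"frontend {name}_front",
                 f"    mode {svc['mode']}",
                 f"    bind {bind}:{svc['listen_port']}",
                 f"    default_backend {name}_back",
                 f"backend {name}_back",
                 f"    mode {svc['mode']}"]
        lines += [f"    {opt}" for opt in svc.get("backend_http_extra", [])]
        lines += [f"    server {host} {ip}:{svc['port']} check inter 2000 rise 2 fall 5"
                  for host, ip in members.items()]
        return "\n".join(lines)

    if __name__ == "__main__":
        for name, svc in keystone_haproxy.items():
            print(render(name, svc), end="\n\n")

Entries such as keystone-ssh and keystone-fernet carry no 'haproxy' key at all, which is why the loop above reports them as "skipping" on every node. [end editor's note]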
2025-06-22 19:59:50.558636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:50.558650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:50.558657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:50.558664 | orchestrator | 2025-06-22 19:59:50.558671 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-22 19:59:50.558677 | orchestrator | Sunday 22 June 2025 19:56:18 +0000 (0:00:03.550) 0:02:39.143 *********** 2025-06-22 19:59:50.558685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 19:59:50.558695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:50.558705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:50.558713 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.558720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 19:59:50.558731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:50.558738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:50.558745 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.558752 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 19:59:50.558765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:50.558773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:50.558783 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.558790 | orchestrator | 2025-06-22 19:59:50.558797 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-22 19:59:50.558804 | orchestrator | Sunday 22 June 2025 19:56:18 +0000 (0:00:00.598) 0:02:39.741 *********** 2025-06-22 19:59:50.558811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:50.558818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:50.558825 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.558832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:50.558839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:50.558846 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.558853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:50.558860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:50.558866 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.558873 | orchestrator | 2025-06-22 19:59:50.558880 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-22 19:59:50.558886 | orchestrator | Sunday 22 June 2025 19:56:19 +0000 (0:00:01.026) 0:02:40.767 *********** 2025-06-22 19:59:50.558893 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.558899 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.558906 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.558913 | orchestrator | 2025-06-22 19:59:50.558919 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-22 19:59:50.558926 | orchestrator | Sunday 22 June 2025 19:56:21 +0000 (0:00:01.283) 0:02:42.051 *********** 2025-06-22 19:59:50.558932 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.558939 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.558946 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.558952 | orchestrator | 2025-06-22 19:59:50.558959 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-22 19:59:50.558966 | orchestrator | Sunday 22 June 2025 19:56:23 +0000 (0:00:02.139) 0:02:44.191 *********** 2025-06-22 19:59:50.558972 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.558979 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.558985 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.558996 | orchestrator | 2025-06-22 19:59:50.559002 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-22 19:59:50.559009 | orchestrator | Sunday 22 June 2025 19:56:23 +0000 (0:00:00.316) 0:02:44.507 *********** 2025-06-22 19:59:50.559016 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.559022 | orchestrator | 2025-06-22 19:59:50.559031 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-22 19:59:50.559038 | orchestrator | Sunday 22 June 2025 19:56:24 +0000 (0:00:01.179) 0:02:45.687 *********** 2025-06-22 19:59:50.559049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:59:50.559057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:59:50.559072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:59:50.559096 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559103 | orchestrator | 2025-06-22 19:59:50.559109 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-22 19:59:50.559120 | orchestrator | Sunday 22 June 2025 19:56:28 +0000 (0:00:03.809) 0:02:49.497 *********** 2025-06-22 19:59:50.559131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:59:50.559143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559154 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.559166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:59:50.559211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559223 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.559234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:59:50.559246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559256 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.559265 | orchestrator | 2025-06-22 19:59:50.559276 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-22 
19:59:50.559286 | orchestrator | Sunday 22 June 2025 19:56:29 +0000 (0:00:00.634) 0:02:50.132 *********** 2025-06-22 19:59:50.559369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:50.559378 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:50.559385 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.559392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:50.559398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:50.559411 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.559418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:50.559425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:50.559431 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.559438 | orchestrator | 2025-06-22 19:59:50.559445 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-22 19:59:50.559451 | orchestrator | Sunday 22 June 2025 19:56:30 +0000 (0:00:01.382) 0:02:51.514 *********** 2025-06-22 19:59:50.559458 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.559464 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.559471 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.559478 | orchestrator | 2025-06-22 19:59:50.559484 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-22 19:59:50.559491 | orchestrator | Sunday 22 June 2025 19:56:31 +0000 (0:00:01.270) 0:02:52.785 *********** 2025-06-22 19:59:50.559498 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.559504 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.559511 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.559518 | orchestrator | 2025-06-22 19:59:50.559528 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-22 19:59:50.559535 | orchestrator | Sunday 22 June 2025 19:56:33 +0000 (0:00:01.981) 0:02:54.766 *********** 2025-06-22 19:59:50.559547 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.559554 | orchestrator | 2025-06-22 19:59:50.559560 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-22 19:59:50.559567 | orchestrator | Sunday 22 June 2025 19:56:34 +0000 (0:00:01.036) 0:02:55.802 *********** 2025-06-22 19:59:50.559574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 
'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:59:50.559582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:59:50.559622 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:59:50.559658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 
'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559686 | orchestrator | 2025-06-22 19:59:50.559693 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-22 19:59:50.559700 | orchestrator | Sunday 22 June 2025 19:56:38 +0000 (0:00:03.989) 0:02:59.791 *********** 2025-06-22 19:59:50.559707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:59:50.559714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559739 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.559746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:59:50.559757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559784 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.559805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:59:50.559812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.559938 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.559945 | orchestrator | 2025-06-22 19:59:50.559951 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-22 19:59:50.559958 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:00.797) 0:03:00.589 *********** 2025-06-22 19:59:50.559965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 
19:59:50.559972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:50.559979 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.559986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:50.559998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:50.560005 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:50.560019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:50.560026 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560032 | orchestrator | 2025-06-22 19:59:50.560039 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-22 19:59:50.560046 | orchestrator | Sunday 22 June 2025 19:56:40 +0000 (0:00:01.102) 0:03:01.691 *********** 2025-06-22 19:59:50.560053 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.560059 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.560066 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.560073 | orchestrator | 2025-06-22 19:59:50.560079 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-22 19:59:50.560086 | orchestrator | Sunday 22 June 2025 19:56:42 +0000 (0:00:01.753) 0:03:03.445 *********** 2025-06-22 19:59:50.560093 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.560100 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.560106 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.560113 | orchestrator | 2025-06-22 19:59:50.560119 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-22 19:59:50.560126 | orchestrator | Sunday 22 June 2025 19:56:44 +0000 (0:00:02.250) 0:03:05.695 *********** 2025-06-22 19:59:50.560133 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.560140 | orchestrator | 2025-06-22 19:59:50.560146 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-22 19:59:50.560153 | orchestrator | Sunday 22 June 2025 19:56:45 +0000 (0:00:01.089) 0:03:06.785 *********** 2025-06-22 19:59:50.560160 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 19:59:50.560167 | orchestrator | 2025-06-22 19:59:50.560173 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-22 19:59:50.560180 | orchestrator | Sunday 22 June 2025 19:56:49 +0000 (0:00:03.321) 0:03:10.106 *********** 2025-06-22 19:59:50.560210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.560224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:50.560231 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 
2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.560246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:50.560253 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.560280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:50.560288 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560294 | orchestrator | 2025-06-22 19:59:50.560301 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-22 19:59:50.560308 | orchestrator | Sunday 22 June 2025 19:56:51 +0000 (0:00:02.326) 0:03:12.433 *********** 2025-06-22 19:59:50.560318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.560333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:50.560341 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.560356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:50.560362 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:50.560388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:50.560395 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560402 | orchestrator | 2025-06-22 19:59:50.560408 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-22 19:59:50.560415 | orchestrator | Sunday 22 June 2025 19:56:53 +0000 (0:00:02.073) 0:03:14.506 *********** 2025-06-22 19:59:50.560422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:50.560429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:50.560436 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:50.560455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:50.560466 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:50.560484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:50.560491 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560497 | orchestrator | 2025-06-22 19:59:50.560504 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-22 19:59:50.560511 | orchestrator | Sunday 22 June 2025 19:56:56 +0000 (0:00:02.434) 0:03:16.941 *********** 2025-06-22 19:59:50.560518 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.560524 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.560531 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.560537 | orchestrator | 2025-06-22 19:59:50.560545 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-22 19:59:50.560553 | orchestrator | Sunday 22 June 2025 19:56:58 +0000 (0:00:01.951) 0:03:18.892 *********** 2025-06-22 19:59:50.560560 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560568 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560575 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560582 | orchestrator | 2025-06-22 19:59:50.560590 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-22 19:59:50.560597 | orchestrator | Sunday 22 June 2025 19:56:59 +0000 (0:00:01.224) 0:03:20.117 *********** 2025-06-22 19:59:50.560604 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560612 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560619 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 19:59:50.560627 | orchestrator | 2025-06-22 19:59:50.560635 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-22 19:59:50.560642 | orchestrator | Sunday 22 June 2025 19:56:59 +0000 (0:00:00.263) 0:03:20.381 *********** 2025-06-22 19:59:50.560650 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.560657 | orchestrator | 2025-06-22 19:59:50.560665 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-22 19:59:50.560672 | orchestrator | Sunday 22 June 2025 19:57:00 +0000 (0:00:01.056) 0:03:21.437 *********** 2025-06-22 19:59:50.560681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:59:50.560694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:59:50.560709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:59:50.560718 | orchestrator | 2025-06-22 19:59:50.560725 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-22 19:59:50.560733 | orchestrator | Sunday 22 June 2025 19:57:02 +0000 (0:00:01.794) 0:03:23.231 *********** 2025-06-22 19:59:50.560741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': 
True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:59:50.560750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:59:50.560758 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560765 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:59:50.560785 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560793 | orchestrator | 2025-06-22 19:59:50.560800 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-22 19:59:50.560808 | orchestrator | Sunday 22 June 2025 19:57:02 +0000 (0:00:00.389) 0:03:23.621 *********** 2025-06-22 19:59:50.560816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:59:50.560824 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:59:50.560838 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 
'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:59:50.560858 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560865 | orchestrator | 2025-06-22 19:59:50.560872 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-22 19:59:50.560878 | orchestrator | Sunday 22 June 2025 19:57:03 +0000 (0:00:00.588) 0:03:24.209 *********** 2025-06-22 19:59:50.560885 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560892 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560898 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560905 | orchestrator | 2025-06-22 19:59:50.560912 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-22 19:59:50.560919 | orchestrator | Sunday 22 June 2025 19:57:04 +0000 (0:00:00.748) 0:03:24.958 *********** 2025-06-22 19:59:50.560925 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560932 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560939 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560945 | orchestrator | 2025-06-22 19:59:50.560952 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-22 19:59:50.560958 | orchestrator | Sunday 22 June 2025 19:57:05 +0000 (0:00:01.255) 0:03:26.214 *********** 2025-06-22 19:59:50.560965 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.560972 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.560978 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.560985 | orchestrator | 2025-06-22 19:59:50.560992 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-22 19:59:50.560998 | orchestrator | Sunday 22 June 2025 19:57:05 +0000 (0:00:00.321) 0:03:26.535 *********** 2025-06-22 19:59:50.561005 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.561011 | orchestrator | 2025-06-22 19:59:50.561018 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-22 19:59:50.561029 | orchestrator | Sunday 22 June 2025 19:57:07 +0000 (0:00:01.389) 0:03:27.924 *********** 2025-06-22 19:59:50.561037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:59:50.561044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:59:50.561054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:59:50.561066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:50.561156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:50.561167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:50.561179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561258 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.561334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.561345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561360 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.561367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.561462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.561477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.561487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.561504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 
'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.561511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.561525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561539 | orchestrator | 2025-06-22 19:59:50.561545 | orchestrator | TASK 
[haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-22 19:59:50.561551 | orchestrator | Sunday 22 June 2025 19:57:11 +0000 (0:00:04.404) 0:03:32.328 *********** 2025-06-22 19:59:50.561652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:59:50.561667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561687 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:50.561697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:59:50.561745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.561845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:59:50.561851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:50.561858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.561942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561949 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.561961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562011 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.562055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:50.562069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.562091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.562152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.562159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.562166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 
'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.562203 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.562214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:50.562272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.562289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.562296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.562344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:50.562351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 
'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.562357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:50.562364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562381 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.562407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:50.562415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': 
{'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:50.562421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.562428 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.562434 | orchestrator | 2025-06-22 19:59:50.562440 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-22 19:59:50.562447 | orchestrator | Sunday 22 June 2025 19:57:12 +0000 (0:00:01.324) 0:03:33.653 *********** 2025-06-22 19:59:50.562453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:50.562460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:50.562471 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.562477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:50.562483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:50.562490 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.562496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:50.562502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:50.562509 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.562515 | orchestrator | 2025-06-22 19:59:50.562521 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-22 19:59:50.562528 | orchestrator | Sunday 22 June 2025 19:57:14 +0000 (0:00:01.717) 
0:03:35.370 *********** 2025-06-22 19:59:50.562534 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.562540 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.562546 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.562552 | orchestrator | 2025-06-22 19:59:50.562559 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-22 19:59:50.562565 | orchestrator | Sunday 22 June 2025 19:57:15 +0000 (0:00:01.254) 0:03:36.625 *********** 2025-06-22 19:59:50.562571 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.562577 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.562583 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.562590 | orchestrator | 2025-06-22 19:59:50.562596 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-22 19:59:50.562602 | orchestrator | Sunday 22 June 2025 19:57:17 +0000 (0:00:02.088) 0:03:38.714 *********** 2025-06-22 19:59:50.562608 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.562614 | orchestrator | 2025-06-22 19:59:50.562621 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-22 19:59:50.562627 | orchestrator | Sunday 22 June 2025 19:57:18 +0000 (0:00:01.157) 0:03:39.871 *********** 2025-06-22 19:59:50.562652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.562660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.562671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.562677 | orchestrator | 2025-06-22 19:59:50.562684 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-22 19:59:50.562690 | orchestrator | Sunday 22 June 2025 19:57:22 +0000 (0:00:03.340) 0:03:43.211 *********** 2025-06-22 19:59:50.562709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.562716 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.562743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.562751 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.562757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.562770 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.562777 | orchestrator | 2025-06-22 19:59:50.562783 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-22 19:59:50.562789 | orchestrator | Sunday 22 June 2025 19:57:22 +0000 (0:00:00.516) 0:03:43.727 *********** 2025-06-22 19:59:50.562796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:50.562802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:50.562809 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.562815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:50.562821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:50.562828 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.562834 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:50.562841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:50.562847 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.562853 | orchestrator | 2025-06-22 19:59:50.562860 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-22 19:59:50.562866 | orchestrator | Sunday 22 June 2025 19:57:23 +0000 (0:00:00.750) 0:03:44.477 *********** 2025-06-22 19:59:50.562872 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.562878 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.562884 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.562890 | orchestrator | 2025-06-22 19:59:50.562898 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-22 19:59:50.562905 | orchestrator | Sunday 22 June 2025 19:57:25 +0000 (0:00:01.688) 0:03:46.166 *********** 2025-06-22 19:59:50.562912 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.562919 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.562926 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.562933 | orchestrator | 2025-06-22 19:59:50.562940 | orchestrator | 
TASK [include_role : nova] ***************************************************** 2025-06-22 19:59:50.562947 | orchestrator | Sunday 22 June 2025 19:57:27 +0000 (0:00:02.079) 0:03:48.245 *********** 2025-06-22 19:59:50.562954 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.562961 | orchestrator | 2025-06-22 19:59:50.562971 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-22 19:59:50.562978 | orchestrator | Sunday 22 June 2025 19:57:28 +0000 (0:00:01.243) 0:03:49.488 *********** 2025-06-22 19:59:50.563003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.563016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.563060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.563088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563102 | orchestrator | 2025-06-22 19:59:50.563109 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-22 19:59:50.563116 | orchestrator | Sunday 22 June 2025 19:57:33 +0000 (0:00:04.403) 0:03:53.892 *********** 2025-06-22 19:59:50.563143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.563156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563171 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.563179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.563222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563246 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.563275 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.563283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.563296 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.563302 | orchestrator | 2025-06-22 19:59:50.563308 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-22 19:59:50.563315 | orchestrator | Sunday 22 June 2025 19:57:33 +0000 (0:00:00.931) 0:03:54.824 *********** 2025-06-22 19:59:50.563321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563329 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563353 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.563359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563405 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.563411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:50.563436 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.563443 | orchestrator | 2025-06-22 19:59:50.563449 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-22 19:59:50.563455 | orchestrator | Sunday 22 June 2025 19:57:34 +0000 (0:00:00.863) 0:03:55.687 *********** 2025-06-22 19:59:50.563461 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.563467 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.563474 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.563480 | orchestrator | 2025-06-22 19:59:50.563486 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-22 19:59:50.563492 | orchestrator | Sunday 22 June 2025 19:57:36 +0000 (0:00:01.660) 0:03:57.348 *********** 2025-06-22 19:59:50.563498 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.563504 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.563511 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.563517 | orchestrator | 2025-06-22 19:59:50.563523 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-22 19:59:50.563530 | orchestrator | Sunday 22 June 2025 19:57:38 +0000 (0:00:02.157) 0:03:59.506 *********** 2025-06-22 19:59:50.563536 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.563542 | orchestrator | 2025-06-22 19:59:50.563548 | orchestrator | TASK [nova-cell : Configure 
loadbalancer for nova-novncproxy] ****************** 2025-06-22 19:59:50.563555 | orchestrator | Sunday 22 June 2025 19:57:40 +0000 (0:00:01.520) 0:04:01.026 *********** 2025-06-22 19:59:50.563561 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-22 19:59:50.563572 | orchestrator | 2025-06-22 19:59:50.563578 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-22 19:59:50.563584 | orchestrator | Sunday 22 June 2025 19:57:41 +0000 (0:00:01.069) 0:04:02.096 *********** 2025-06-22 19:59:50.563591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:59:50.563597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:59:50.563607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:59:50.563614 | orchestrator | 2025-06-22 19:59:50.563638 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-22 19:59:50.563645 | orchestrator | Sunday 22 June 2025 19:57:45 +0000 (0:00:04.012) 0:04:06.108 *********** 2025-06-22 19:59:50.563652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563658 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.563665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563671 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.563678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563684 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.563691 | orchestrator | 2025-06-22 19:59:50.563697 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-22 19:59:50.563708 | orchestrator | Sunday 22 June 2025 19:57:46 +0000 (0:00:01.351) 0:04:07.460 *********** 2025-06-22 19:59:50.563714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:50.563721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:50.563728 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.563734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:50.563741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:50.563747 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.563753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:50.563760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:50.563766 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.563773 | orchestrator | 2025-06-22 19:59:50.563779 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:59:50.563784 | orchestrator | Sunday 22 June 2025 19:57:48 +0000 (0:00:01.811) 0:04:09.271 *********** 2025-06-22 19:59:50.563790 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.563795 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.563801 | orchestrator | 
changed: [testbed-node-2] 2025-06-22 19:59:50.563806 | orchestrator | 2025-06-22 19:59:50.563814 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:59:50.563820 | orchestrator | Sunday 22 June 2025 19:57:50 +0000 (0:00:02.370) 0:04:11.642 *********** 2025-06-22 19:59:50.563826 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.563831 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.563837 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.563842 | orchestrator | 2025-06-22 19:59:50.563862 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-22 19:59:50.563868 | orchestrator | Sunday 22 June 2025 19:57:53 +0000 (0:00:03.208) 0:04:14.850 *********** 2025-06-22 19:59:50.563874 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-22 19:59:50.563880 | orchestrator | 2025-06-22 19:59:50.563885 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-22 19:59:50.563891 | orchestrator | Sunday 22 June 2025 19:57:54 +0000 (0:00:00.916) 0:04:15.766 *********** 2025-06-22 19:59:50.563896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563908 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.563914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563920 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.563926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563931 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.563937 | orchestrator | 2025-06-22 19:59:50.563942 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-22 19:59:50.563948 | orchestrator | Sunday 22 June 2025 19:57:56 +0000 (0:00:01.775) 0:04:17.542 *********** 2025-06-22 19:59:50.563953 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563959 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.563965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563971 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.563979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:50.563985 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.563990 | orchestrator | 2025-06-22 19:59:50.564010 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-22 19:59:50.564016 | orchestrator | Sunday 22 June 2025 19:57:58 +0000 (0:00:02.213) 0:04:19.755 *********** 2025-06-22 19:59:50.564022 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.564027 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.564033 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.564038 | orchestrator | 2025-06-22 19:59:50.564044 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:59:50.564054 | orchestrator | Sunday 22 June 2025 19:58:00 +0000 (0:00:01.737) 0:04:21.493 *********** 2025-06-22 19:59:50.564059 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.564065 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.564070 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.564076 | orchestrator | 2025-06-22 19:59:50.564081 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:59:50.564087 | orchestrator | Sunday 22 June 2025 19:58:03 +0000 (0:00:02.402) 0:04:23.896 *********** 2025-06-22 19:59:50.564092 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.564097 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.564103 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.564108 | orchestrator | 2025-06-22 19:59:50.564114 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-22 19:59:50.564119 | orchestrator 
| Sunday 22 June 2025 19:58:05 +0000 (0:00:02.966) 0:04:26.863 *********** 2025-06-22 19:59:50.564125 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-22 19:59:50.564131 | orchestrator | 2025-06-22 19:59:50.564136 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-22 19:59:50.564142 | orchestrator | Sunday 22 June 2025 19:58:06 +0000 (0:00:00.823) 0:04:27.686 *********** 2025-06-22 19:59:50.564147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:50.564431 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.564444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:50.564451 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.564457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:50.564463 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.564468 | orchestrator | 2025-06-22 19:59:50.564474 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-22 19:59:50.564480 | orchestrator | Sunday 22 June 2025 19:58:07 +0000 (0:00:00.972) 0:04:28.658 *********** 2025-06-22 19:59:50.564485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:50.564497 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.564532 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:50.564539 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.564545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:50.564551 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.564556 | orchestrator | 2025-06-22 19:59:50.564562 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-22 19:59:50.564567 | orchestrator | Sunday 22 June 2025 19:58:09 +0000 (0:00:01.396) 0:04:30.055 *********** 2025-06-22 19:59:50.564573 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.564578 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.564584 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.564589 | orchestrator | 2025-06-22 19:59:50.564595 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:59:50.564600 | orchestrator | Sunday 22 June 2025 19:58:10 +0000 (0:00:01.412) 0:04:31.468 *********** 2025-06-22 19:59:50.564606 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.564611 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.564617 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.564622 | orchestrator | 2025-06-22 19:59:50.564628 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:59:50.564633 | orchestrator | Sunday 22 June 2025 19:58:12 +0000 (0:00:02.356) 0:04:33.825 *********** 2025-06-22 19:59:50.564639 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.564644 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.564650 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.564655 | orchestrator | 2025-06-22 19:59:50.564661 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-22 19:59:50.564666 | orchestrator | Sunday 22 June 2025 19:58:16 +0000 (0:00:03.184) 0:04:37.010 *********** 2025-06-22 19:59:50.564672 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.564677 | orchestrator | 2025-06-22 19:59:50.564683 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-22 19:59:50.564688 | orchestrator | Sunday 22 June 2025 19:58:17 +0000 (0:00:01.273) 0:04:38.283 *********** 2025-06-22 19:59:50.564694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.564704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:50.564713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.564747 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.564753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:50.564762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.564798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.564804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:50.564810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.564830 | orchestrator | 2025-06-22 19:59:50.564836 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-22 19:59:50.564841 | orchestrator | Sunday 22 June 2025 19:58:20 +0000 (0:00:03.559) 0:04:41.843 *********** 2025-06-22 19:59:50.564864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.564871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:50.564876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.564897 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.564906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.564926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:50.564932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.564953 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.564959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.564964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:50.564987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:50.564999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:50.565004 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.565010 | orchestrator | 2025-06-22 19:59:50.565016 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-22 19:59:50.565021 | orchestrator | Sunday 22 June 2025 19:58:21 +0000 (0:00:00.691) 0:04:42.534 *********** 2025-06-22 19:59:50.565027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 
19:59:50.565038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:50.565044 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.565050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:50.565055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:50.565061 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.565066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:50.565072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:50.565077 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.565082 | orchestrator | 2025-06-22 19:59:50.565088 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-22 19:59:50.565093 | orchestrator | Sunday 22 June 2025 19:58:22 +0000 (0:00:00.941) 0:04:43.476 *********** 2025-06-22 19:59:50.565099 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.565104 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.565109 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.565115 | orchestrator | 2025-06-22 19:59:50.565120 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-22 19:59:50.565126 | orchestrator | Sunday 22 June 2025 19:58:24 +0000 (0:00:01.727) 0:04:45.204 *********** 2025-06-22 19:59:50.565131 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.565136 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.565142 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.565147 | orchestrator | 2025-06-22 19:59:50.565152 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-22 19:59:50.565158 | orchestrator | Sunday 22 June 2025 19:58:26 +0000 (0:00:02.050) 0:04:47.255 *********** 2025-06-22 19:59:50.565163 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.565169 | orchestrator | 2025-06-22 19:59:50.565174 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-22 19:59:50.565182 | orchestrator | Sunday 22 June 2025 19:58:27 +0000 (0:00:01.323) 0:04:48.578 *********** 2025-06-22 19:59:50.565252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:50.565260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:50.565270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:50.565277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:50.565301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:50.565309 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:50.565319 | orchestrator | 2025-06-22 19:59:50.565325 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-22 19:59:50.565330 | orchestrator | Sunday 22 June 2025 19:58:32 +0000 (0:00:05.284) 0:04:53.863 *********** 2025-06-22 19:59:50.565336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:50.565342 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:50.565348 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.565370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:50.565377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:50.565387 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.565393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:50.565399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:50.565405 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.565410 | orchestrator | 2025-06-22 19:59:50.565415 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-22 19:59:50.565421 | orchestrator | Sunday 22 June 2025 19:58:34 +0000 (0:00:01.037) 0:04:54.900 *********** 2025-06-22 19:59:50.565426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 19:59:50.565431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:50.565455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:50.565461 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.565466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 19:59:50.565474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:50.565479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:50.565484 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.565489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}})  2025-06-22 19:59:50.565494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:50.565499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:50.565504 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.565508 | orchestrator | 2025-06-22 19:59:50.565513 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-22 19:59:50.565518 | orchestrator | Sunday 22 June 2025 19:58:34 +0000 (0:00:00.877) 0:04:55.777 *********** 2025-06-22 19:59:50.565523 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.565528 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.565532 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.565537 | orchestrator | 2025-06-22 19:59:50.565542 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-22 19:59:50.565547 | orchestrator | Sunday 22 June 2025 19:58:35 +0000 (0:00:00.449) 0:04:56.226 *********** 2025-06-22 19:59:50.565551 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.565556 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.565561 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.565566 | orchestrator | 2025-06-22 19:59:50.565570 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-22 19:59:50.565575 | orchestrator | Sunday 22 June 2025 19:58:36 +0000 (0:00:01.395) 0:04:57.622 *********** 2025-06-22 19:59:50.565580 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.565585 | orchestrator | 2025-06-22 19:59:50.565590 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-22 19:59:50.565594 | orchestrator | Sunday 22 June 2025 19:58:38 +0000 (0:00:01.656) 0:04:59.279 *********** 2025-06-22 19:59:50.565599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:59:50.565607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:59:50.565630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:50.565636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:50.565642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:59:50.565700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:50.565705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565721 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:59:50.565735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:50.565740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 
'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:59:50.565746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:50.565751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:59:50.565799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:50.565808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565828 | orchestrator | 2025-06-22 19:59:50.565833 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-22 19:59:50.565838 | orchestrator | Sunday 22 June 2025 19:58:42 +0000 (0:00:04.365) 0:05:03.645 *********** 2025-06-22 19:59:50.565843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:59:50.565848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:50.565853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:59:50.565883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:50.565888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565893 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565906 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.565911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:59:50.565921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:50.565927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565937 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:59:50.565951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:50.565959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:59:50.565972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.565977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:50.565982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.565990 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.565995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.566000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.566005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.566035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:59:50.566042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:50.566048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.566058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:50.566063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:50.566068 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566073 | orchestrator | 2025-06-22 19:59:50.566077 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-22 19:59:50.566082 | orchestrator | Sunday 22 June 2025 19:58:44 +0000 (0:00:01.617) 0:05:05.263 *********** 2025-06-22 19:59:50.566087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:59:50.566093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 19:59:50.566098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:50.566109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:50.566114 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:59:50.566124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 19:59:50.566129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:50.566134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:50.566144 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:59:50.566154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 19:59:50.566159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:50.566164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:50.566169 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566173 | orchestrator | 2025-06-22 19:59:50.566179 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] 
********* 2025-06-22 19:59:50.566183 | orchestrator | Sunday 22 June 2025 19:58:45 +0000 (0:00:00.983) 0:05:06.247 *********** 2025-06-22 19:59:50.566202 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566207 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566211 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566216 | orchestrator | 2025-06-22 19:59:50.566221 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-22 19:59:50.566226 | orchestrator | Sunday 22 June 2025 19:58:45 +0000 (0:00:00.428) 0:05:06.676 *********** 2025-06-22 19:59:50.566231 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566236 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566241 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566245 | orchestrator | 2025-06-22 19:59:50.566250 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-22 19:59:50.566255 | orchestrator | Sunday 22 June 2025 19:58:47 +0000 (0:00:01.623) 0:05:08.299 *********** 2025-06-22 19:59:50.566260 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.566265 | orchestrator | 2025-06-22 19:59:50.566270 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-22 19:59:50.566275 | orchestrator | Sunday 22 June 2025 19:58:49 +0000 (0:00:01.664) 0:05:09.964 *********** 2025-06-22 19:59:50.566283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:59:50.566301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 
19:59:50.566311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:59:50.566317 | orchestrator | 2025-06-22 19:59:50.566321 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-22 19:59:50.566326 | orchestrator | Sunday 22 June 2025 19:58:51 +0000 (0:00:02.341) 0:05:12.305 *********** 2025-06-22 19:59:50.566331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:59:50.566337 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:59:50.566352 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 
'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:59:50.566366 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566373 | orchestrator | 2025-06-22 19:59:50.566378 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-22 19:59:50.566383 | orchestrator | Sunday 22 June 2025 19:58:51 +0000 (0:00:00.374) 0:05:12.680 *********** 2025-06-22 19:59:50.566388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:59:50.566393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:59:50.566398 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566403 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:59:50.566413 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566417 | orchestrator | 2025-06-22 19:59:50.566422 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-22 19:59:50.566427 | orchestrator | Sunday 22 June 2025 19:58:52 +0000 (0:00:00.992) 0:05:13.672 *********** 2025-06-22 19:59:50.566432 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566436 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566441 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566446 | orchestrator | 2025-06-22 19:59:50.566451 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-22 19:59:50.566455 | orchestrator | Sunday 22 June 2025 19:58:53 +0000 (0:00:00.458) 0:05:14.131 *********** 2025-06-22 19:59:50.566460 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566465 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566470 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566474 | orchestrator | 2025-06-22 19:59:50.566479 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-22 19:59:50.566484 | orchestrator | Sunday 22 June 2025 19:58:54 +0000 (0:00:01.306) 0:05:15.438 *********** 2025-06-22 19:59:50.566489 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:50.566493 | orchestrator | 2025-06-22 19:59:50.566498 | 
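The rabbitmq item dump above (and the skyline one that follows) shows the shape of the data the haproxy-config role loops over: one entry per kolla-ansible service, with a nested 'haproxy' dict describing each listener (mode, port, host group, optional external FQDN). A minimal sketch of that structure and of iterating over it, with values copied from the rabbitmq dump and trimmed for brevity; this is illustrative, not the full kolla-ansible service definition:

# Sketch of the per-service data seen in the item dumps above (trimmed).
services = {
    "rabbitmq": {
        "container_name": "rabbitmq",
        "enabled": True,
        "image": "registry.osism.tech/kolla/rabbitmq:2024.2",
        "haproxy": {
            "rabbitmq_management": {
                "enabled": "yes",
                "mode": "http",
                "port": "15672",
                "host_group": "rabbitmq",
            },
        },
    },
}

# One haproxy listener is rendered per enabled entry in the 'haproxy' dict.
for name, service in services.items():
    if not service.get("enabled"):
        continue
    for listener, cfg in service.get("haproxy", {}).items():
        if cfg.get("enabled") == "yes":
            print(f"{listener}: mode={cfg['mode']}, port={cfg['port']}, "
                  f"backends from group {cfg.get('host_group', name)!r}")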
orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-22 19:59:50.566503 | orchestrator | Sunday 22 June 2025 19:58:56 +0000 (0:00:01.739) 0:05:17.178 *********** 2025-06-22 19:59:50.566508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.566522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.566527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.566532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.566538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.566553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:50.566558 | orchestrator | 2025-06-22 19:59:50.566563 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-22 19:59:50.566568 | orchestrator | Sunday 22 June 2025 19:59:02 +0000 (0:00:05.832) 0:05:23.010 *********** 2025-06-22 19:59:50.566573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.566578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.566583 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.566603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.566609 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.566619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:59:50.566624 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566628 | orchestrator | 2025-06-22 19:59:50.566633 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-22 19:59:50.566638 | orchestrator | Sunday 22 June 2025 19:59:02 +0000 (0:00:00.643) 0:05:23.654 *********** 2025-06-22 19:59:50.566643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566666 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566681 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566693 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:50.566720 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566725 | orchestrator | 2025-06-22 19:59:50.566730 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-22 19:59:50.566735 | orchestrator | Sunday 22 June 2025 19:59:04 +0000 (0:00:01.658) 0:05:25.312 *********** 2025-06-22 19:59:50.566740 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.566744 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.566749 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.566754 | orchestrator | 2025-06-22 19:59:50.566759 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-22 19:59:50.566763 | orchestrator | Sunday 22 June 2025 19:59:05 +0000 (0:00:01.345) 0:05:26.658 *********** 2025-06-22 19:59:50.566768 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.566773 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.566778 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.566783 | orchestrator | 2025-06-22 19:59:50.566787 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-22 19:59:50.566792 | orchestrator | Sunday 22 June 2025 19:59:08 +0000 (0:00:02.250) 0:05:28.908 *********** 2025-06-22 19:59:50.566797 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566802 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566807 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566811 | orchestrator | 2025-06-22 19:59:50.566816 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-22 19:59:50.566825 | orchestrator | Sunday 22 June 2025 19:59:08 +0000 (0:00:00.337) 0:05:29.246 *********** 2025-06-22 19:59:50.566830 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566834 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 19:59:50.566839 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566844 | orchestrator | 2025-06-22 19:59:50.566849 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-22 19:59:50.566854 | orchestrator | Sunday 22 June 2025 19:59:08 +0000 (0:00:00.313) 0:05:29.559 *********** 2025-06-22 19:59:50.566858 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566863 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566868 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566873 | orchestrator | 2025-06-22 19:59:50.566877 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-22 19:59:50.566882 | orchestrator | Sunday 22 June 2025 19:59:09 +0000 (0:00:00.654) 0:05:30.214 *********** 2025-06-22 19:59:50.566887 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566892 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566897 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566901 | orchestrator | 2025-06-22 19:59:50.566906 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-22 19:59:50.566911 | orchestrator | Sunday 22 June 2025 19:59:09 +0000 (0:00:00.337) 0:05:30.551 *********** 2025-06-22 19:59:50.566916 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566920 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566925 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566930 | orchestrator | 2025-06-22 19:59:50.566934 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-22 19:59:50.566939 | orchestrator | Sunday 22 June 2025 19:59:09 +0000 (0:00:00.314) 0:05:30.866 *********** 2025-06-22 19:59:50.566944 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.566949 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.566954 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.566958 | orchestrator | 2025-06-22 19:59:50.566963 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-22 19:59:50.566968 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.898) 0:05:31.764 *********** 2025-06-22 19:59:50.566973 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.566978 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.566982 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.566987 | orchestrator | 2025-06-22 19:59:50.566992 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-22 19:59:50.566997 | orchestrator | Sunday 22 June 2025 19:59:11 +0000 (0:00:00.674) 0:05:32.438 *********** 2025-06-22 19:59:50.567002 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.567006 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.567011 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.567016 | orchestrator | 2025-06-22 19:59:50.567021 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-22 19:59:50.567025 | orchestrator | Sunday 22 June 2025 19:59:11 +0000 (0:00:00.333) 0:05:32.771 *********** 2025-06-22 19:59:50.567030 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.567035 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.567040 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.567044 
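The handler block running here restarts the loadbalancer stack in a fixed order: the backup keepalived, haproxy and proxysql containers are stopped, haproxy and proxysql are started again with a wait after each, keepalived is started last, and the play finally waits until haproxy and proxysql listen on the VIP; the corresponding master handlers are skipped in this run. A minimal sketch of such a listen check, assuming a plain TCP connect (the real handlers presumably use Ansible's wait_for; the address and port below are placeholders, not values from this deployment):

import socket
import time

# Sketch of a "wait until the service listens on the VIP" check.
# 192.0.2.10 and 443 are placeholder values, not taken from this log.
def wait_for_listen(host: str, port: int, timeout: float = 300.0, interval: float = 2.0) -> bool:
    """Return True once a TCP connection to host:port succeeds, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

if not wait_for_listen("192.0.2.10", 443):
    raise SystemExit("haproxy is not listening on the VIP")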
| orchestrator | 2025-06-22 19:59:50.567052 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-22 19:59:50.567057 | orchestrator | Sunday 22 June 2025 19:59:12 +0000 (0:00:00.919) 0:05:33.691 *********** 2025-06-22 19:59:50.567061 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.567066 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.567073 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.567078 | orchestrator | 2025-06-22 19:59:50.567083 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-22 19:59:50.567088 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:01.397) 0:05:35.088 *********** 2025-06-22 19:59:50.567097 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.567102 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.567107 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.567112 | orchestrator | 2025-06-22 19:59:50.567116 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-22 19:59:50.567121 | orchestrator | Sunday 22 June 2025 19:59:15 +0000 (0:00:00.928) 0:05:36.017 *********** 2025-06-22 19:59:50.567126 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.567131 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.567135 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.567140 | orchestrator | 2025-06-22 19:59:50.567145 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-22 19:59:50.567150 | orchestrator | Sunday 22 June 2025 19:59:19 +0000 (0:00:04.601) 0:05:40.618 *********** 2025-06-22 19:59:50.567155 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.567160 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.567164 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.567169 | orchestrator | 2025-06-22 19:59:50.567174 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-22 19:59:50.567179 | orchestrator | Sunday 22 June 2025 19:59:22 +0000 (0:00:02.812) 0:05:43.431 *********** 2025-06-22 19:59:50.567183 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.567202 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.567207 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.567212 | orchestrator | 2025-06-22 19:59:50.567217 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-22 19:59:50.567221 | orchestrator | Sunday 22 June 2025 19:59:30 +0000 (0:00:08.271) 0:05:51.702 *********** 2025-06-22 19:59:50.567226 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.567231 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.567236 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.567241 | orchestrator | 2025-06-22 19:59:50.567245 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-22 19:59:50.567250 | orchestrator | Sunday 22 June 2025 19:59:34 +0000 (0:00:03.752) 0:05:55.454 *********** 2025-06-22 19:59:50.567255 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:50.567260 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:50.567265 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:50.567269 | orchestrator | 2025-06-22 19:59:50.567274 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 
2025-06-22 19:59:50.567279 | orchestrator | Sunday 22 June 2025 19:59:42 +0000 (0:00:08.209) 0:06:03.664 *********** 2025-06-22 19:59:50.567284 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.567288 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.567293 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.567298 | orchestrator | 2025-06-22 19:59:50.567303 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-22 19:59:50.567308 | orchestrator | Sunday 22 June 2025 19:59:43 +0000 (0:00:00.350) 0:06:04.015 *********** 2025-06-22 19:59:50.567312 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.567317 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.567322 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.567327 | orchestrator | 2025-06-22 19:59:50.567332 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-22 19:59:50.567336 | orchestrator | Sunday 22 June 2025 19:59:43 +0000 (0:00:00.692) 0:06:04.708 *********** 2025-06-22 19:59:50.567341 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.567346 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.567351 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.567355 | orchestrator | 2025-06-22 19:59:50.567360 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-22 19:59:50.567365 | orchestrator | Sunday 22 June 2025 19:59:44 +0000 (0:00:00.359) 0:06:05.067 *********** 2025-06-22 19:59:50.567370 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.567380 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.567385 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.567390 | orchestrator | 2025-06-22 19:59:50.567395 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-22 19:59:50.567400 | orchestrator | Sunday 22 June 2025 19:59:44 +0000 (0:00:00.343) 0:06:05.411 *********** 2025-06-22 19:59:50.567404 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.567409 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.567414 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.567419 | orchestrator | 2025-06-22 19:59:50.567423 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-22 19:59:50.567428 | orchestrator | Sunday 22 June 2025 19:59:44 +0000 (0:00:00.364) 0:06:05.776 *********** 2025-06-22 19:59:50.567433 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:50.567438 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:50.567442 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:50.567447 | orchestrator | 2025-06-22 19:59:50.567452 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-22 19:59:50.567457 | orchestrator | Sunday 22 June 2025 19:59:45 +0000 (0:00:00.673) 0:06:06.449 *********** 2025-06-22 19:59:50.567461 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.567466 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.567471 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.567476 | orchestrator | 2025-06-22 19:59:50.567481 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-06-22 19:59:50.567485 | orchestrator | Sunday 22 June 2025 19:59:46 +0000 
(0:00:00.929) 0:06:07.378 *********** 2025-06-22 19:59:50.567490 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:50.567495 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:50.567500 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:50.567504 | orchestrator | 2025-06-22 19:59:50.567512 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:59:50.567517 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-22 19:59:50.567525 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-22 19:59:50.567530 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-06-22 19:59:50.567535 | orchestrator | 2025-06-22 19:59:50.567540 | orchestrator | 2025-06-22 19:59:50.567545 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:59:50.567550 | orchestrator | Sunday 22 June 2025 19:59:47 +0000 (0:00:00.806) 0:06:08.185 *********** 2025-06-22 19:59:50.567554 | orchestrator | =============================================================================== 2025-06-22 19:59:50.567559 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.27s 2025-06-22 19:59:50.567564 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.21s 2025-06-22 19:59:50.567569 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.83s 2025-06-22 19:59:50.567573 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.28s 2025-06-22 19:59:50.567578 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.99s 2025-06-22 19:59:50.567583 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.75s 2025-06-22 19:59:50.567588 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.64s 2025-06-22 19:59:50.567593 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.60s 2025-06-22 19:59:50.567597 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.40s 2025-06-22 19:59:50.567602 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.40s 2025-06-22 19:59:50.567610 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.37s 2025-06-22 19:59:50.567615 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.25s 2025-06-22 19:59:50.567620 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.22s 2025-06-22 19:59:50.567625 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.01s 2025-06-22 19:59:50.567630 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.99s 2025-06-22 19:59:50.567634 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.98s 2025-06-22 19:59:50.567639 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 3.91s 2025-06-22 19:59:50.567644 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.81s 2025-06-22 19:59:50.567649 | orchestrator | loadbalancer 
: Wait for backup proxysql to start ------------------------ 3.75s 2025-06-22 19:59:50.567654 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 3.70s 2025-06-22 19:59:50.567658 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 19:59:50.567663 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 19:59:50.567668 | orchestrator | 2025-06-22 19:59:50 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:59:53 - 2025-06-22 20:02:22 | orchestrator | Tasks d1f02d22-93a7-4e15-b82f-d4ee991a3d5d, 7578fef9-cd7c-4fca-9dc3-075f1296ba9b and 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 remain in state STARTED; the same status check and "Wait 1 second(s) until the next check" message repeat roughly every three seconds until 20:02:22.
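The interleaved INFO lines come from the deployment wrapper on the orchestrator, which polls the state of the three tasks it has queued and sleeps between checks until they leave the STARTED state. A minimal sketch of that polling pattern; get_task_state() is a hypothetical stand-in that simply simulates tasks finishing, since the actual client used by the wrapper is not visible in this log:

import time

# Minimal sketch of the polling loop reflected in the INFO lines above.
# get_task_state() is a stand-in; here it simulates each task finishing
# after a few polls, while the real wrapper would query the task backend.
_SIMULATED_POLLS_UNTIL_DONE = 3
_poll_counts: dict[str, int] = {}

def get_task_state(task_id: str) -> str:
    _poll_counts[task_id] = _poll_counts.get(task_id, 0) + 1
    return "SUCCESS" if _poll_counts[task_id] > _SIMULATED_POLLS_UNTIL_DONE else "STARTED"

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("STARTED", "PENDING"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks([
    "d1f02d22-93a7-4e15-b82f-d4ee991a3d5d",
    "7578fef9-cd7c-4fca-9dc3-075f1296ba9b",
    "4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0",
])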
20:02:13 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:13.797855 | orchestrator | 2025-06-22 20:02:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:16.846582 | orchestrator | 2025-06-22 20:02:16 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 20:02:16.847463 | orchestrator | 2025-06-22 20:02:16 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:16.848782 | orchestrator | 2025-06-22 20:02:16 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:16.848906 | orchestrator | 2025-06-22 20:02:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:19.893845 | orchestrator | 2025-06-22 20:02:19 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 20:02:19.895201 | orchestrator | 2025-06-22 20:02:19 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:19.896799 | orchestrator | 2025-06-22 20:02:19 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:19.897016 | orchestrator | 2025-06-22 20:02:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:22.941942 | orchestrator | 2025-06-22 20:02:22 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state STARTED 2025-06-22 20:02:22.943525 | orchestrator | 2025-06-22 20:02:22 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:22.945540 | orchestrator | 2025-06-22 20:02:22 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:22.945653 | orchestrator | 2025-06-22 20:02:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:26.005428 | orchestrator | 2025-06-22 20:02:26.005553 | orchestrator | 2025-06-22 20:02:26.005577 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-06-22 20:02:26.005598 | orchestrator | 2025-06-22 20:02:26.005617 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-22 20:02:26.005636 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:00.743) 0:00:00.743 *********** 2025-06-22 20:02:26.005656 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.005675 | orchestrator | 2025-06-22 20:02:26.005693 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-22 20:02:26.006180 | orchestrator | Sunday 22 June 2025 19:50:58 +0000 (0:00:01.074) 0:00:01.817 *********** 2025-06-22 20:02:26.006208 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.006230 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.006249 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.006269 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.006287 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.006305 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.006324 | orchestrator | 2025-06-22 20:02:26.006343 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-22 20:02:26.006361 | orchestrator | Sunday 22 June 2025 19:51:00 +0000 (0:00:01.672) 0:00:03.490 *********** 2025-06-22 20:02:26.006379 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.006396 | orchestrator | ok: [testbed-node-4] 2025-06-22 
20:02:26.006414 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.006431 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.006446 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.006463 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.006482 | orchestrator | 2025-06-22 20:02:26.006498 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-22 20:02:26.006515 | orchestrator | Sunday 22 June 2025 19:51:00 +0000 (0:00:00.744) 0:00:04.234 *********** 2025-06-22 20:02:26.006532 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.006548 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.006564 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.006581 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.006598 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.006616 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.006635 | orchestrator | 2025-06-22 20:02:26.006655 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-22 20:02:26.006676 | orchestrator | Sunday 22 June 2025 19:51:02 +0000 (0:00:01.202) 0:00:05.436 *********** 2025-06-22 20:02:26.006728 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.006746 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.006938 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.007025 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.007044 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.007062 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.007079 | orchestrator | 2025-06-22 20:02:26.007097 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-22 20:02:26.007115 | orchestrator | Sunday 22 June 2025 19:51:03 +0000 (0:00:00.904) 0:00:06.341 *********** 2025-06-22 20:02:26.007163 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.007183 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.007203 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.007222 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.007242 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.007262 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.007281 | orchestrator | 2025-06-22 20:02:26.007301 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-22 20:02:26.007320 | orchestrator | Sunday 22 June 2025 19:51:03 +0000 (0:00:00.784) 0:00:07.125 *********** 2025-06-22 20:02:26.007339 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.007358 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.007376 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.007395 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.007414 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.007433 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.007451 | orchestrator | 2025-06-22 20:02:26.007469 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-22 20:02:26.007488 | orchestrator | Sunday 22 June 2025 19:51:04 +0000 (0:00:00.822) 0:00:07.948 *********** 2025-06-22 20:02:26.007507 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.007527 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.007545 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.007563 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.007582 
| orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.007599 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.007617 | orchestrator | 2025-06-22 20:02:26.007636 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-22 20:02:26.007654 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:00.690) 0:00:08.639 *********** 2025-06-22 20:02:26.007672 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.007691 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.007709 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.007745 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.007764 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.007782 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.007842 | orchestrator | 2025-06-22 20:02:26.007862 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-22 20:02:26.007916 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:00.958) 0:00:09.597 *********** 2025-06-22 20:02:26.007936 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:02:26.007954 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:02:26.007975 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:02:26.007994 | orchestrator | 2025-06-22 20:02:26.008103 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-22 20:02:26.008124 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:00.781) 0:00:10.379 *********** 2025-06-22 20:02:26.008354 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.008376 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.008396 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.008513 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.008535 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.008576 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.008596 | orchestrator | 2025-06-22 20:02:26.008642 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-22 20:02:26.008662 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:01.201) 0:00:11.580 *********** 2025-06-22 20:02:26.008682 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:02:26.008699 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:02:26.008714 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:02:26.008725 | orchestrator | 2025-06-22 20:02:26.008741 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-22 20:02:26.008756 | orchestrator | Sunday 22 June 2025 19:51:11 +0000 (0:00:02.765) 0:00:14.346 *********** 2025-06-22 20:02:26.008774 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:02:26.008789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:02:26.008807 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:02:26.008824 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.008921 | orchestrator | 2025-06-22 20:02:26.008939 | orchestrator | TASK [ceph-facts : Check if the ceph mon 
socket is in-use] ********************* 2025-06-22 20:02:26.008954 | orchestrator | Sunday 22 June 2025 19:51:12 +0000 (0:00:01.028) 0:00:15.375 *********** 2025-06-22 20:02:26.008973 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.008992 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.009007 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.009023 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.009038 | orchestrator | 2025-06-22 20:02:26.009055 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-22 20:02:26.009069 | orchestrator | Sunday 22 June 2025 19:51:12 +0000 (0:00:00.802) 0:00:16.178 *********** 2025-06-22 20:02:26.009089 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.009109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.009326 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.009372 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.009405 | orchestrator | 2025-06-22 20:02:26.009518 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-22 20:02:26.009538 | orchestrator | Sunday 22 June 2025 19:51:13 +0000 (0:00:00.453) 0:00:16.631 *********** 2025-06-22 20:02:26.009578 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-22 19:51:08.886530', 'end': '2025-06-22 19:51:09.173338', 'delta': '0:00:00.286808', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.009601 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-22 19:51:09.931257', 'end': '2025-06-22 19:51:10.203100', 'delta': '0:00:00.271843', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.009618 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-22 19:51:10.726791', 'end': '2025-06-22 19:51:10.961301', 'delta': '0:00:00.234510', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.009635 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.009653 | orchestrator | 2025-06-22 20:02:26.009669 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-22 20:02:26.009685 | orchestrator | Sunday 22 June 2025 19:51:13 +0000 (0:00:00.296) 0:00:16.928 *********** 2025-06-22 20:02:26.009736 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.009757 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.009774 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.009847 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.009865 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.009880 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.009895 | orchestrator | 2025-06-22 20:02:26.009912 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-22 20:02:26.009927 | orchestrator | Sunday 22 June 2025 19:51:15 +0000 (0:00:01.447) 0:00:18.376 *********** 2025-06-22 20:02:26.009980 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.010001 | orchestrator | 2025-06-22 20:02:26.010084 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-22 20:02:26.010107 | orchestrator | Sunday 22 June 2025 19:51:15 +0000 (0:00:00.828) 0:00:19.205 *********** 2025-06-22 20:02:26.010123 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.010236 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.010253 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.010479 | orchestrator | skipping: [testbed-node-0] 2025-06-22 
20:02:26.010521 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.010539 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.010555 | orchestrator | 2025-06-22 20:02:26.010570 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-22 20:02:26.010585 | orchestrator | Sunday 22 June 2025 19:51:17 +0000 (0:00:01.419) 0:00:20.624 *********** 2025-06-22 20:02:26.010599 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.010728 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.010750 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.010768 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.010785 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.010803 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.010821 | orchestrator | 2025-06-22 20:02:26.010839 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:02:26.010857 | orchestrator | Sunday 22 June 2025 19:51:18 +0000 (0:00:01.597) 0:00:22.221 *********** 2025-06-22 20:02:26.010874 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.010892 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.010909 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.010938 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.010954 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.010968 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.010983 | orchestrator | 2025-06-22 20:02:26.010998 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-22 20:02:26.011013 | orchestrator | Sunday 22 June 2025 19:51:19 +0000 (0:00:00.996) 0:00:23.218 *********** 2025-06-22 20:02:26.011029 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.011042 | orchestrator | 2025-06-22 20:02:26.011058 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-22 20:02:26.011073 | orchestrator | Sunday 22 June 2025 19:51:20 +0000 (0:00:00.169) 0:00:23.388 *********** 2025-06-22 20:02:26.011087 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.011102 | orchestrator | 2025-06-22 20:02:26.011119 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:02:26.011163 | orchestrator | Sunday 22 June 2025 19:51:20 +0000 (0:00:00.358) 0:00:23.746 *********** 2025-06-22 20:02:26.011181 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.011197 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.011213 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.011228 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.011245 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.011285 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.011299 | orchestrator | 2025-06-22 20:02:26.011335 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-22 20:02:26.011350 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:00.994) 0:00:24.741 *********** 2025-06-22 20:02:26.011365 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.011379 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.011394 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.011411 | orchestrator | skipping: [testbed-node-0] 
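[Editor's note] The "Find a running mon container" / "Set_fact running_mon" tasks above probe each monitor with `docker ps -q --filter name=ceph-mon-<hostname>` (the command is recorded verbatim in the skipped items) and only register a running mon when the probe returns a container ID; in this run every probe returned an empty stdout, so the running_mon facts were skipped. The Python below is a minimal illustrative sketch of that probe, not the ceph-ansible source: the real role delegates the command to each mon host over SSH, while this sketch runs it locally. The host names and the docker container binary are taken from the log; everything else is an assumption.

import subprocess

MON_HOSTS = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]  # mon hosts from the log
CONTAINER_BINARY = "docker"  # container_binary fact set earlier in this play

def find_running_mon(hosts=MON_HOSTS):
    # Probe each monitor host; a non-empty `docker ps -q` result means a
    # ceph-mon container for that host is already running.
    for host in hosts:
        result = subprocess.run(
            [CONTAINER_BINARY, "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
            capture_output=True, text=True, check=False,
        )
        if result.stdout.strip():
            return host
    return None  # no running mon found -> fresh deployment path, as in this log

if __name__ == "__main__":
    print("running mon:", find_running_mon())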
2025-06-22 20:02:26.011427 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.011443 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.011459 | orchestrator | 2025-06-22 20:02:26.011474 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-22 20:02:26.011491 | orchestrator | Sunday 22 June 2025 19:51:22 +0000 (0:00:01.147) 0:00:25.888 *********** 2025-06-22 20:02:26.011507 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.011523 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.011537 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.011547 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.011556 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.011566 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.011575 | orchestrator | 2025-06-22 20:02:26.011585 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-22 20:02:26.011613 | orchestrator | Sunday 22 June 2025 19:51:23 +0000 (0:00:00.829) 0:00:26.717 *********** 2025-06-22 20:02:26.011629 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.011644 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.011658 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.011672 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.011687 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.011701 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.011716 | orchestrator | 2025-06-22 20:02:26.011730 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-22 20:02:26.011744 | orchestrator | Sunday 22 June 2025 19:51:24 +0000 (0:00:00.729) 0:00:27.446 *********** 2025-06-22 20:02:26.011760 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.011775 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.011791 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.011806 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.011822 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.011837 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.011852 | orchestrator | 2025-06-22 20:02:26.011867 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-22 20:02:26.011883 | orchestrator | Sunday 22 June 2025 19:51:24 +0000 (0:00:00.626) 0:00:28.073 *********** 2025-06-22 20:02:26.011898 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.011913 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.011929 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.011944 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.011960 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.011975 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.011990 | orchestrator | 2025-06-22 20:02:26.012005 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-22 20:02:26.012018 | orchestrator | Sunday 22 June 2025 19:51:25 +0000 (0:00:01.011) 0:00:29.084 *********** 2025-06-22 20:02:26.012029 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.012041 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.012055 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.012067 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 20:02:26.012078 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.012090 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.012101 | orchestrator | 2025-06-22 20:02:26.012114 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-22 20:02:26.012161 | orchestrator | Sunday 22 June 2025 19:51:26 +0000 (0:00:00.765) 0:00:29.850 *********** 2025-06-22 20:02:26.012179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988500a7--3c26--5f89--b599--1c63900dc902-osd--block--988500a7--3c26--5f89--b599--1c63900dc902', 'dm-uuid-LVM-ZZ2TtSjbnMojwAj3mtQDARFQeMNsdJxjTzVJdkK9yPtVvNy9jvy7424QwNw0aPi5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012208 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1623286--8630--50a6--960f--aa7fe8c22ac9-osd--block--f1623286--8630--50a6--960f--aa7fe8c22ac9', 'dm-uuid-LVM-pvws3no6YnWFb5jLLz15f4x3pF8jIM7ceJ2LSdtQSj3b4EnkSIQUHqz557SL12cs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-06-22 20:02:26.012310 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--809c9636--3d83--5d3b--8a98--356a4387ae79-osd--block--809c9636--3d83--5d3b--8a98--356a4387ae79', 'dm-uuid-LVM-HNvb7P5UdpqK3mwinBCUfOavIhvtapZSo26fe1u9TpjVyD7pwEfLe1urhCckwvSQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012358 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012373 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e-osd--block--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e', 'dm-uuid-LVM-cs6yb6i6fq5eydSrrPQKaabKKtL0P8Uw5rl2V5e79OcpIQ83rtE6ZFduJNqoKS8E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012418 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012435 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--988500a7--3c26--5f89--b599--1c63900dc902-osd--block--988500a7--3c26--5f89--b599--1c63900dc902'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aHnAQW-CY2S-QPcG-urub-1Uq7-6Nhu-1ETFxy', 'scsi-0QEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade', 'scsi-SQEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012505 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f1623286--8630--50a6--960f--aa7fe8c22ac9-osd--block--f1623286--8630--50a6--960f--aa7fe8c22ac9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FsWRfU-wdE9-1nsq-H9R0-fLa0-WHjk-2DzLs0', 'scsi-0QEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d', 'scsi-SQEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012536 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc', 'scsi-SQEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012648 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012662 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b2f14396--315c--50f9--a6a7--8817318b41c3-osd--block--b2f14396--315c--50f9--a6a7--8817318b41c3', 'dm-uuid-LVM-qAyMel2csi5wA2oHi0eSYgu6TRAIHRBr0CRB2s2crF0E3DsICXFrbq2cprESQylt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012697 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--809c9636--3d83--5d3b--8a98--356a4387ae79-osd--block--809c9636--3d83--5d3b--8a98--356a4387ae79'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y5sRIk-LsA5-itBW-li2o-MNUJ-ffYN-BNgrYU', 'scsi-0QEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a', 'scsi-SQEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012727 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e-osd--block--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-S3aXuW-Blj8-bafQ-BtEV-2zgi-BrrR-g7gMaT', 'scsi-0QEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b', 'scsi-SQEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012743 | orchestrator | 2025-06-22 20:02:25 | INFO  | Task d1f02d22-93a7-4e15-b82f-d4ee991a3d5d is in state SUCCESS 2025-06-22 20:02:26.012757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1', 'scsi-SQEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60bbbdec--af53--55ad--b293--31f676104815-osd--block--60bbbdec--af53--55ad--b293--31f676104815', 'dm-uuid-LVM-a3vy1AKtadDZuu1qWxIr3lZp7NOsXyj8EpgcK9hcB9JuQNvZo9XvtcRp6hzkTg97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012786 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.012799 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [],
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012839 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012854 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.012867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012890 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012977 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.012992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2025-06-22 20:02:26.013005 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013049 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013076 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b2f14396--315c--50f9--a6a7--8817318b41c3-osd--block--b2f14396--315c--50f9--a6a7--8817318b41c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YrvepJ-gtxc-rI1A-6L49-iRD4-STYx-7gD10V', 'scsi-0QEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2', 'scsi-SQEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--60bbbdec--af53--55ad--b293--31f676104815-osd--block--60bbbdec--af53--55ad--b293--31f676104815'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YeYW6l-qb1k-qFKK-wbuM-lxrP-uce0-4eVM95', 'scsi-0QEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae', 'scsi-SQEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013124 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2', 'scsi-SQEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part1', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part14', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part15', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part16', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013371 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.013386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-06-22 20:02:26.013418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part1', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part14', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part15', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part16', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013580 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.013594 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.013607 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.013620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:02:26.013774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part1', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part14', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part15', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part16', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:02:26.013808 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.013820 | orchestrator | 2025-06-22 20:02:26.013832 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-22 20:02:26.013843 | orchestrator | Sunday 22 June 2025 19:51:28 +0000 (0:00:01.899) 0:00:31.749 *********** 2025-06-22 20:02:26.013854 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988500a7--3c26--5f89--b599--1c63900dc902-osd--block--988500a7--3c26--5f89--b599--1c63900dc902', 'dm-uuid-LVM-ZZ2TtSjbnMojwAj3mtQDARFQeMNsdJxjTzVJdkK9yPtVvNy9jvy7424QwNw0aPi5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.013867 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1623286--8630--50a6--960f--aa7fe8c22ac9-osd--block--f1623286--8630--50a6--960f--aa7fe8c22ac9', 'dm-uuid-LVM-pvws3no6YnWFb5jLLz15f4x3pF8jIM7ceJ2LSdtQSj3b4EnkSIQUHqz557SL12cs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015222 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015229 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015236 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015243 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015322 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015333 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015346 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015355 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--988500a7--3c26--5f89--b599--1c63900dc902-osd--block--988500a7--3c26--5f89--b599--1c63900dc902'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aHnAQW-CY2S-QPcG-urub-1Uq7-6Nhu-1ETFxy', 'scsi-0QEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade', 'scsi-SQEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015403 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f1623286--8630--50a6--960f--aa7fe8c22ac9-osd--block--f1623286--8630--50a6--960f--aa7fe8c22ac9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FsWRfU-wdE9-1nsq-H9R0-fLa0-WHjk-2DzLs0', 'scsi-0QEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d', 'scsi-SQEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015419 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--809c9636--3d83--5d3b--8a98--356a4387ae79-osd--block--809c9636--3d83--5d3b--8a98--356a4387ae79', 'dm-uuid-LVM-HNvb7P5UdpqK3mwinBCUfOavIhvtapZSo26fe1u9TpjVyD7pwEfLe1urhCckwvSQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc', 'scsi-SQEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015443 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e-osd--block--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e', 'dm-uuid-LVM-cs6yb6i6fq5eydSrrPQKaabKKtL0P8Uw5rl2V5e79OcpIQ83rtE6ZFduJNqoKS8E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015462 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-06-22-19-10-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015522 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015536 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015553 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015565 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015623 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b2f14396--315c--50f9--a6a7--8817318b41c3-osd--block--b2f14396--315c--50f9--a6a7--8817318b41c3', 'dm-uuid-LVM-qAyMel2csi5wA2oHi0eSYgu6TRAIHRBr0CRB2s2crF0E3DsICXFrbq2cprESQylt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015645 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015733 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015749 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60bbbdec--af53--55ad--b293--31f676104815-osd--block--60bbbdec--af53--55ad--b293--31f676104815', 'dm-uuid-LVM-a3vy1AKtadDZuu1qWxIr3lZp7NOsXyj8EpgcK9hcB9JuQNvZo9XvtcRp6hzkTg97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015763 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015774 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015962 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.015974 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.015987 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--809c9636--3d83--5d3b--8a98--356a4387ae79-osd--block--809c9636--3d83--5d3b--8a98--356a4387ae79'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y5sRIk-LsA5-itBW-li2o-MNUJ-ffYN-BNgrYU', 'scsi-0QEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a', 'scsi-SQEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016014 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e-osd--block--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-S3aXuW-Blj8-bafQ-BtEV-2zgi-BrrR-g7gMaT', 'scsi-0QEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b', 'scsi-SQEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016115 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016150 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1', 'scsi-SQEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016169 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016181 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016201 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016213 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016305 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016321 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016339 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016349 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016360 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016377 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016387 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016480 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016497 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016515 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part1', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part14', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part15', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part16', 'scsi-SQEMU_QEMU_HARDDISK_215d9bf5-4869-41a3-a63c-a129ce87d105-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016535 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016627 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016650 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016674 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b2f14396--315c--50f9--a6a7--8817318b41c3-osd--block--b2f14396--315c--50f9--a6a7--8817318b41c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YrvepJ-gtxc-rI1A-6L49-iRD4-STYx-7gD10V', 'scsi-0QEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2', 'scsi-SQEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016685 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.016763 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--60bbbdec--af53--55ad--b293--31f676104815-osd--block--60bbbdec--af53--55ad--b293--31f676104815'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YeYW6l-qb1k-qFKK-wbuM-lxrP-uce0-4eVM95', 'scsi-0QEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae', 'scsi-SQEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016799 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2', 'scsi-SQEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016812 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016834 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016847 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016859 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016951 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': 
'', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016975 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.016987 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017008 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017020 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017032 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.017259 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part1', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part14', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part15', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part16', 'scsi-SQEMU_QEMU_HARDDISK_0983c77d-7f56-479f-b361-5b63b2990634-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017315 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017331 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.017339 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.017347 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017355 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017362 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017462 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017474 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017489 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017503 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017510 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017570 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part1', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part14', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part15', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part16', 'scsi-SQEMU_QEMU_HARDDISK_3580f05e-2d50-4314-b9ba-0e0b6e7b4cf4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017586 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-51-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:02:26.017594 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.017601 | orchestrator | 2025-06-22 20:02:26.017608 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-22 20:02:26.017616 | orchestrator | Sunday 22 June 2025 19:51:30 +0000 (0:00:01.628) 0:00:33.378 *********** 2025-06-22 20:02:26.017623 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.017630 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.017636 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.017643 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.017650 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.017665 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.017672 | orchestrator | 2025-06-22 20:02:26.017679 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-22 20:02:26.017686 | orchestrator | Sunday 22 June 2025 19:51:32 +0000 (0:00:01.975) 0:00:35.354 *********** 2025-06-22 20:02:26.017693 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.017699 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.017706 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.017712 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.017719 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.017726 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.017732 | orchestrator | 2025-06-22 20:02:26.017739 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:02:26.017746 | orchestrator | Sunday 22 June 2025 19:51:32 +0000 (0:00:00.543) 0:00:35.897 *********** 2025-06-22 20:02:26.017753 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.017759 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.017766 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.017772 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.017779 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.017786 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.017792 | orchestrator | 2025-06-22 20:02:26.017799 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:02:26.017806 | orchestrator | Sunday 22 June 2025 19:51:33 +0000 (0:00:00.760) 0:00:36.658 *********** 2025-06-22 20:02:26.017812 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.017819 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.017835 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.017843 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.017850 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.017857 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.017864 | orchestrator | 2025-06-22 20:02:26.017871 | orchestrator | 
TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:02:26.017878 | orchestrator | Sunday 22 June 2025 19:51:34 +0000 (0:00:00.835) 0:00:37.493 *********** 2025-06-22 20:02:26.017885 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.017892 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.017906 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.017914 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.017920 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.017932 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.017939 | orchestrator | 2025-06-22 20:02:26.017946 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:02:26.017953 | orchestrator | Sunday 22 June 2025 19:51:35 +0000 (0:00:01.056) 0:00:38.550 *********** 2025-06-22 20:02:26.017960 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.018013 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.018063 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.018071 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.018078 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.018084 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.018091 | orchestrator | 2025-06-22 20:02:26.018098 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-22 20:02:26.018106 | orchestrator | Sunday 22 June 2025 19:51:36 +0000 (0:00:00.760) 0:00:39.311 *********** 2025-06-22 20:02:26.018113 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-22 20:02:26.018120 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-22 20:02:26.018145 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-22 20:02:26.018157 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-22 20:02:26.018164 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:02:26.018171 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-22 20:02:26.018177 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-06-22 20:02:26.018184 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-22 20:02:26.018190 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 20:02:26.018197 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-22 20:02:26.018203 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-06-22 20:02:26.018210 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-22 20:02:26.018216 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-06-22 20:02:26.018223 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-06-22 20:02:26.018234 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-06-22 20:02:26.018241 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 20:02:26.018247 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-06-22 20:02:26.018254 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-22 20:02:26.018260 | orchestrator | 2025-06-22 20:02:26.018267 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-22 20:02:26.018274 | orchestrator | Sunday 22 June 2025 19:51:40 +0000 (0:00:04.240) 0:00:43.552 *********** 2025-06-22 20:02:26.018280 
| orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:02:26.018287 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:02:26.018294 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:02:26.018300 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.018307 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 20:02:26.018314 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 20:02:26.018320 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 20:02:26.018327 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.018333 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 20:02:26.018340 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 20:02:26.018347 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 20:02:26.018353 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.018360 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:02:26.018380 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:02:26.018387 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:02:26.018399 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.018406 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-22 20:02:26.018412 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-22 20:02:26.018419 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-22 20:02:26.018425 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.018432 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-22 20:02:26.018438 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-22 20:02:26.018445 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-22 20:02:26.018452 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.018458 | orchestrator | 2025-06-22 20:02:26.018465 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-22 20:02:26.018472 | orchestrator | Sunday 22 June 2025 19:51:41 +0000 (0:00:00.703) 0:00:44.256 *********** 2025-06-22 20:02:26.018478 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.018485 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.018491 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.018499 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.018505 | orchestrator | 2025-06-22 20:02:26.018512 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 20:02:26.018520 | orchestrator | Sunday 22 June 2025 19:51:41 +0000 (0:00:00.912) 0:00:45.168 *********** 2025-06-22 20:02:26.018526 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.018533 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.018539 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.018546 | orchestrator | 2025-06-22 20:02:26.018553 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 
20:02:26.018559 | orchestrator | Sunday 22 June 2025 19:51:42 +0000 (0:00:00.281) 0:00:45.450 *********** 2025-06-22 20:02:26.018566 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.018573 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.018579 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.018586 | orchestrator | 2025-06-22 20:02:26.018592 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 20:02:26.018624 | orchestrator | Sunday 22 June 2025 19:51:42 +0000 (0:00:00.506) 0:00:45.957 *********** 2025-06-22 20:02:26.018632 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.018639 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.018647 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.018655 | orchestrator | 2025-06-22 20:02:26.018663 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 20:02:26.018671 | orchestrator | Sunday 22 June 2025 19:51:42 +0000 (0:00:00.287) 0:00:46.244 *********** 2025-06-22 20:02:26.018679 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.018687 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.018695 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.018703 | orchestrator | 2025-06-22 20:02:26.018711 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 20:02:26.018719 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:00.370) 0:00:46.614 *********** 2025-06-22 20:02:26.018727 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.018734 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.018742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.018750 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.018757 | orchestrator | 2025-06-22 20:02:26.018765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 20:02:26.018773 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:00.359) 0:00:46.974 *********** 2025-06-22 20:02:26.018781 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.018794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.018802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.018813 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.018821 | orchestrator | 2025-06-22 20:02:26.018829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 20:02:26.018837 | orchestrator | Sunday 22 June 2025 19:51:44 +0000 (0:00:00.349) 0:00:47.323 *********** 2025-06-22 20:02:26.018845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.018852 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.018860 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.018868 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.018876 | orchestrator | 2025-06-22 20:02:26.018884 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 20:02:26.018891 | orchestrator | Sunday 22 June 2025 19:51:44 +0000 (0:00:00.357) 0:00:47.680 *********** 2025-06-22 
20:02:26.018899 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.018907 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.018915 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.018922 | orchestrator | 2025-06-22 20:02:26.018930 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 20:02:26.018938 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:00.661) 0:00:48.342 *********** 2025-06-22 20:02:26.018946 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:02:26.018954 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 20:02:26.018961 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 20:02:26.018968 | orchestrator | 2025-06-22 20:02:26.018975 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-22 20:02:26.018981 | orchestrator | Sunday 22 June 2025 19:51:45 +0000 (0:00:00.717) 0:00:49.060 *********** 2025-06-22 20:02:26.018988 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:02:26.018996 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:02:26.019003 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:02:26.019009 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:02:26.019016 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:02:26.019023 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:02:26.019030 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 20:02:26.019036 | orchestrator | 2025-06-22 20:02:26.019043 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-22 20:02:26.019050 | orchestrator | Sunday 22 June 2025 19:51:46 +0000 (0:00:00.827) 0:00:49.888 *********** 2025-06-22 20:02:26.019056 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:02:26.019063 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:02:26.019070 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:02:26.019076 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:02:26.019083 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:02:26.019090 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:02:26.019097 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 20:02:26.019103 | orchestrator | 2025-06-22 20:02:26.019110 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:02:26.019117 | orchestrator | Sunday 22 June 2025 19:51:48 +0000 (0:00:02.107) 0:00:51.995 *********** 2025-06-22 20:02:26.019180 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.019191 | orchestrator | 2025-06-22 20:02:26.019198 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-22 20:02:26.019227 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:01.210) 0:00:53.206 *********** 2025-06-22 20:02:26.019235 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.019242 | orchestrator | 2025-06-22 20:02:26.019249 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:02:26.019256 | orchestrator | Sunday 22 June 2025 19:51:51 +0000 (0:00:01.526) 0:00:54.733 *********** 2025-06-22 20:02:26.019262 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.019269 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.019276 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.019282 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.019289 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.019296 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.019303 | orchestrator | 2025-06-22 20:02:26.019309 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:02:26.019316 | orchestrator | Sunday 22 June 2025 19:51:52 +0000 (0:00:01.316) 0:00:56.049 *********** 2025-06-22 20:02:26.019323 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.019330 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.019336 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.019343 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.019350 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.019356 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.019363 | orchestrator | 2025-06-22 20:02:26.019370 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:02:26.019376 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:00.795) 0:00:56.845 *********** 2025-06-22 20:02:26.019387 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.019394 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.019400 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.019407 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.019414 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.019420 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.019427 | orchestrator | 2025-06-22 20:02:26.019434 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:02:26.019441 | orchestrator | Sunday 22 June 2025 19:51:54 +0000 (0:00:01.003) 0:00:57.849 *********** 2025-06-22 20:02:26.019447 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.019454 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.019460 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.019467 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.019474 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.019480 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.019487 | orchestrator | 2025-06-22 20:02:26.019493 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:02:26.019500 | orchestrator | Sunday 22 June 2025 19:51:55 +0000 (0:00:00.860) 0:00:58.710 *********** 2025-06-22 20:02:26.019507 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.019514 | 
orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.019520 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.019527 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.019533 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.019540 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.019547 | orchestrator | 2025-06-22 20:02:26.019553 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:02:26.019560 | orchestrator | Sunday 22 June 2025 19:51:56 +0000 (0:00:01.185) 0:00:59.895 *********** 2025-06-22 20:02:26.019572 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.019579 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.019585 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.019591 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.019598 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.019604 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.019610 | orchestrator | 2025-06-22 20:02:26.019616 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:02:26.019622 | orchestrator | Sunday 22 June 2025 19:51:57 +0000 (0:00:00.492) 0:01:00.388 *********** 2025-06-22 20:02:26.019629 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.019635 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.019641 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.019647 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.019653 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.019659 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.019666 | orchestrator | 2025-06-22 20:02:26.019672 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:02:26.019678 | orchestrator | Sunday 22 June 2025 19:51:57 +0000 (0:00:00.671) 0:01:01.060 *********** 2025-06-22 20:02:26.019684 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.019690 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.019697 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.019703 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.019709 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.019715 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.019721 | orchestrator | 2025-06-22 20:02:26.019727 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:02:26.019734 | orchestrator | Sunday 22 June 2025 19:51:58 +0000 (0:00:01.072) 0:01:02.132 *********** 2025-06-22 20:02:26.019740 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.019746 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.019752 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.019758 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.019764 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.019770 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.019776 | orchestrator | 2025-06-22 20:02:26.019783 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:02:26.019789 | orchestrator | Sunday 22 June 2025 19:52:00 +0000 (0:00:01.150) 0:01:03.283 *********** 2025-06-22 20:02:26.019795 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.019801 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.019808 | orchestrator 
| skipping: [testbed-node-5] 2025-06-22 20:02:26.019814 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.019820 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.019826 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.019832 | orchestrator | 2025-06-22 20:02:26.019838 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:02:26.019862 | orchestrator | Sunday 22 June 2025 19:52:00 +0000 (0:00:00.482) 0:01:03.765 *********** 2025-06-22 20:02:26.019869 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.019876 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.019882 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.019888 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.019895 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.019901 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.019907 | orchestrator | 2025-06-22 20:02:26.019913 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:02:26.019920 | orchestrator | Sunday 22 June 2025 19:52:01 +0000 (0:00:00.683) 0:01:04.448 *********** 2025-06-22 20:02:26.019926 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.019932 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.019938 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.019944 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.019956 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.019962 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.019968 | orchestrator | 2025-06-22 20:02:26.019974 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:02:26.019981 | orchestrator | Sunday 22 June 2025 19:52:01 +0000 (0:00:00.623) 0:01:05.072 *********** 2025-06-22 20:02:26.019987 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.019993 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.019999 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.020005 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020012 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020018 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020024 | orchestrator | 2025-06-22 20:02:26.020030 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:02:26.020037 | orchestrator | Sunday 22 June 2025 19:52:02 +0000 (0:00:00.671) 0:01:05.743 *********** 2025-06-22 20:02:26.020046 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.020053 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.020059 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.020065 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020071 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020077 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020084 | orchestrator | 2025-06-22 20:02:26.020090 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:02:26.020096 | orchestrator | Sunday 22 June 2025 19:52:03 +0000 (0:00:00.573) 0:01:06.317 *********** 2025-06-22 20:02:26.020103 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.020109 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.020115 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.020121 | 
orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020143 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020150 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020156 | orchestrator | 2025-06-22 20:02:26.020163 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:02:26.020169 | orchestrator | Sunday 22 June 2025 19:52:03 +0000 (0:00:00.610) 0:01:06.927 *********** 2025-06-22 20:02:26.020175 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.020181 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.020187 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.020193 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020199 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020205 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020212 | orchestrator | 2025-06-22 20:02:26.020218 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:02:26.020224 | orchestrator | Sunday 22 June 2025 19:52:04 +0000 (0:00:00.417) 0:01:07.344 *********** 2025-06-22 20:02:26.020230 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.020236 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.020242 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.020248 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.020255 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.020261 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.020267 | orchestrator | 2025-06-22 20:02:26.020273 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:02:26.020279 | orchestrator | Sunday 22 June 2025 19:52:04 +0000 (0:00:00.775) 0:01:08.119 *********** 2025-06-22 20:02:26.020285 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.020292 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.020298 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.020304 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.020310 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.020316 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.020322 | orchestrator | 2025-06-22 20:02:26.020328 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:02:26.020338 | orchestrator | Sunday 22 June 2025 19:52:05 +0000 (0:00:00.649) 0:01:08.768 *********** 2025-06-22 20:02:26.020344 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.020351 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.020357 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.020363 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.020369 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.020375 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.020381 | orchestrator | 2025-06-22 20:02:26.020387 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-22 20:02:26.020393 | orchestrator | Sunday 22 June 2025 19:52:06 +0000 (0:00:01.035) 0:01:09.804 *********** 2025-06-22 20:02:26.020399 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.020406 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.020412 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.020418 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.020424 | orchestrator | 
changed: [testbed-node-1] 2025-06-22 20:02:26.020430 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.020436 | orchestrator | 2025-06-22 20:02:26.020443 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-22 20:02:26.020449 | orchestrator | Sunday 22 June 2025 19:52:08 +0000 (0:00:01.511) 0:01:11.316 *********** 2025-06-22 20:02:26.020455 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.020461 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.020467 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.020473 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.020479 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.020485 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.020492 | orchestrator | 2025-06-22 20:02:26.020498 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-22 20:02:26.020523 | orchestrator | Sunday 22 June 2025 19:52:10 +0000 (0:00:01.943) 0:01:13.260 *********** 2025-06-22 20:02:26.020530 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.020537 | orchestrator | 2025-06-22 20:02:26.020543 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-22 20:02:26.020549 | orchestrator | Sunday 22 June 2025 19:52:11 +0000 (0:00:01.013) 0:01:14.273 *********** 2025-06-22 20:02:26.020556 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.020562 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.020568 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.020574 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020580 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020587 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020593 | orchestrator | 2025-06-22 20:02:26.020599 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-22 20:02:26.020605 | orchestrator | Sunday 22 June 2025 19:52:11 +0000 (0:00:00.670) 0:01:14.943 *********** 2025-06-22 20:02:26.020611 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.020618 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.020624 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.020630 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020636 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020642 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020648 | orchestrator | 2025-06-22 20:02:26.020654 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-22 20:02:26.020667 | orchestrator | Sunday 22 June 2025 19:52:12 +0000 (0:00:00.557) 0:01:15.501 *********** 2025-06-22 20:02:26.020673 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:02:26.020679 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:02:26.020685 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:02:26.020696 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:02:26.020702 | orchestrator | ok: [testbed-node-3] => 
(item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:02:26.020708 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:02:26.020714 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:02:26.020721 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:02:26.020727 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:02:26.020733 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:02:26.020739 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:02:26.020745 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:02:26.020751 | orchestrator | 2025-06-22 20:02:26.020758 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-22 20:02:26.020764 | orchestrator | Sunday 22 June 2025 19:52:13 +0000 (0:00:01.475) 0:01:16.977 *********** 2025-06-22 20:02:26.020770 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.020776 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.020782 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.020788 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.020794 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.020801 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.020807 | orchestrator | 2025-06-22 20:02:26.020813 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-22 20:02:26.020819 | orchestrator | Sunday 22 June 2025 19:52:14 +0000 (0:00:00.895) 0:01:17.872 *********** 2025-06-22 20:02:26.020825 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.020831 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.020838 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.020844 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020850 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020856 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020862 | orchestrator | 2025-06-22 20:02:26.020868 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-22 20:02:26.020874 | orchestrator | Sunday 22 June 2025 19:52:15 +0000 (0:00:00.691) 0:01:18.564 *********** 2025-06-22 20:02:26.020881 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.020887 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.020893 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.020899 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020905 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020911 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020917 | orchestrator | 2025-06-22 20:02:26.020923 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-22 20:02:26.020930 | orchestrator | Sunday 22 June 2025 19:52:15 +0000 (0:00:00.498) 0:01:19.063 *********** 2025-06-22 20:02:26.020936 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.020942 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.020948 | orchestrator | skipping: [testbed-node-5] 
2025-06-22 20:02:26.020954 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.020960 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.020967 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.020973 | orchestrator | 2025-06-22 20:02:26.020979 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-22 20:02:26.020985 | orchestrator | Sunday 22 June 2025 19:52:16 +0000 (0:00:00.603) 0:01:19.666 *********** 2025-06-22 20:02:26.021009 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.021020 | orchestrator | 2025-06-22 20:02:26.021026 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-22 20:02:26.021032 | orchestrator | Sunday 22 June 2025 19:52:17 +0000 (0:00:00.992) 0:01:20.659 *********** 2025-06-22 20:02:26.021038 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.021045 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.021051 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.021057 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.021063 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.021069 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.021076 | orchestrator | 2025-06-22 20:02:26.021082 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-22 20:02:26.021088 | orchestrator | Sunday 22 June 2025 19:53:31 +0000 (0:01:14.543) 0:02:35.202 *********** 2025-06-22 20:02:26.021094 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:02:26.021101 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:02:26.021107 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:02:26.021113 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021119 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:02:26.021125 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:02:26.021148 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:02:26.021155 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021164 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:02:26.021170 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:02:26.021177 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:02:26.021183 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021189 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:02:26.021195 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:02:26.021202 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:02:26.021208 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021214 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:02:26.021220 | orchestrator | skipping: [testbed-node-1] => 
(item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:02:26.021226 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:02:26.021232 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021239 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:02:26.021245 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:02:26.021251 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:02:26.021257 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021263 | orchestrator | 2025-06-22 20:02:26.021269 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-22 20:02:26.021276 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:00.864) 0:02:36.067 *********** 2025-06-22 20:02:26.021282 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021288 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021294 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021300 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021306 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021312 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021319 | orchestrator | 2025-06-22 20:02:26.021325 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-22 20:02:26.021336 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:00.609) 0:02:36.676 *********** 2025-06-22 20:02:26.021342 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021348 | orchestrator | 2025-06-22 20:02:26.021354 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-22 20:02:26.021360 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:00.155) 0:02:36.831 *********** 2025-06-22 20:02:26.021366 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021373 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021379 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021385 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021391 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021397 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021403 | orchestrator | 2025-06-22 20:02:26.021409 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-22 20:02:26.021415 | orchestrator | Sunday 22 June 2025 19:53:34 +0000 (0:00:01.035) 0:02:37.867 *********** 2025-06-22 20:02:26.021422 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021428 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021434 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021440 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021446 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021452 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021458 | orchestrator | 2025-06-22 20:02:26.021464 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-22 20:02:26.021470 | orchestrator | Sunday 22 June 2025 19:53:35 +0000 (0:00:00.733) 0:02:38.600 *********** 2025-06-22 20:02:26.021476 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021482 | orchestrator | skipping: 
[testbed-node-4] 2025-06-22 20:02:26.021488 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021494 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021501 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021507 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021513 | orchestrator | 2025-06-22 20:02:26.021537 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-22 20:02:26.021544 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:00.996) 0:02:39.597 *********** 2025-06-22 20:02:26.021551 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.021557 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.021563 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.021569 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.021575 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.021581 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.021587 | orchestrator | 2025-06-22 20:02:26.021594 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-22 20:02:26.021600 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:02.190) 0:02:41.787 *********** 2025-06-22 20:02:26.021606 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.021612 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.021618 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.021624 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.021630 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.021636 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.021643 | orchestrator | 2025-06-22 20:02:26.021649 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-22 20:02:26.021655 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.741) 0:02:42.529 *********** 2025-06-22 20:02:26.021661 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.021668 | orchestrator | 2025-06-22 20:02:26.021674 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-22 20:02:26.021683 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:01.114) 0:02:43.644 *********** 2025-06-22 20:02:26.021694 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021700 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021706 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021712 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021719 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021725 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021731 | orchestrator | 2025-06-22 20:02:26.021737 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-22 20:02:26.021743 | orchestrator | Sunday 22 June 2025 19:53:41 +0000 (0:00:00.690) 0:02:44.335 *********** 2025-06-22 20:02:26.021749 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021755 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021761 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021767 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021773 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021780 | orchestrator | 
skipping: [testbed-node-2] 2025-06-22 20:02:26.021786 | orchestrator | 2025-06-22 20:02:26.021792 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-22 20:02:26.021798 | orchestrator | Sunday 22 June 2025 19:53:41 +0000 (0:00:00.781) 0:02:45.116 *********** 2025-06-22 20:02:26.021804 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021810 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021816 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021822 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021829 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021835 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021841 | orchestrator | 2025-06-22 20:02:26.021847 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-22 20:02:26.021853 | orchestrator | Sunday 22 June 2025 19:53:42 +0000 (0:00:00.674) 0:02:45.791 *********** 2025-06-22 20:02:26.021859 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021865 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021871 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021877 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021883 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021889 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021896 | orchestrator | 2025-06-22 20:02:26.021902 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-22 20:02:26.021908 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:00.718) 0:02:46.509 *********** 2025-06-22 20:02:26.021914 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021920 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021926 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021932 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021938 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021944 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.021950 | orchestrator | 2025-06-22 20:02:26.021956 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-22 20:02:26.021963 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:00.580) 0:02:47.089 *********** 2025-06-22 20:02:26.021969 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.021975 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.021981 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.021987 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.021993 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.021999 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.022005 | orchestrator | 2025-06-22 20:02:26.022011 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-22 20:02:26.022038 | orchestrator | Sunday 22 June 2025 19:53:44 +0000 (0:00:00.773) 0:02:47.863 *********** 2025-06-22 20:02:26.022045 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.022051 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.022062 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.022068 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.022074 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.022081 | 
orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.022087 | orchestrator | 2025-06-22 20:02:26.022093 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-22 20:02:26.022099 | orchestrator | Sunday 22 June 2025 19:53:45 +0000 (0:00:00.541) 0:02:48.404 *********** 2025-06-22 20:02:26.022106 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.022112 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.022118 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.022124 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.022175 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.022188 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.022197 | orchestrator | 2025-06-22 20:02:26.022203 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-22 20:02:26.022209 | orchestrator | Sunday 22 June 2025 19:53:45 +0000 (0:00:00.694) 0:02:49.099 *********** 2025-06-22 20:02:26.022215 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.022222 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.022228 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.022234 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.022240 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.022246 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.022252 | orchestrator | 2025-06-22 20:02:26.022259 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-22 20:02:26.022265 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:01.121) 0:02:50.221 *********** 2025-06-22 20:02:26.022271 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.022278 | orchestrator | 2025-06-22 20:02:26.022284 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-22 20:02:26.022290 | orchestrator | Sunday 22 June 2025 19:53:48 +0000 (0:00:01.154) 0:02:51.375 *********** 2025-06-22 20:02:26.022296 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-22 20:02:26.022303 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-22 20:02:26.022309 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-22 20:02:26.022315 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-22 20:02:26.022326 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-22 20:02:26.022332 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-22 20:02:26.022339 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-22 20:02:26.022345 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-22 20:02:26.022351 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-22 20:02:26.022357 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-22 20:02:26.022363 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-22 20:02:26.022370 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-22 20:02:26.022376 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-22 20:02:26.022382 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-22 20:02:26.022388 | 
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-22 20:02:26.022395 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-22 20:02:26.022401 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-06-22 20:02:26.022407 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-22 20:02:26.022413 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-22 20:02:26.022420 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-22 20:02:26.022426 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-22 20:02:26.022440 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-22 20:02:26.022447 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-22 20:02:26.022453 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-22 20:02:26.022459 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-22 20:02:26.022465 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-22 20:02:26.022471 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-22 20:02:26.022478 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-22 20:02:26.022484 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-22 20:02:26.022490 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-22 20:02:26.022496 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-22 20:02:26.022502 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-22 20:02:26.022508 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-22 20:02:26.022515 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-22 20:02:26.022521 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-22 20:02:26.022527 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-22 20:02:26.022533 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-22 20:02:26.022539 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-22 20:02:26.022545 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-22 20:02:26.022552 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:02:26.022558 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:02:26.022564 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:02:26.022570 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-22 20:02:26.022576 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-22 20:02:26.022582 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:02:26.022589 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:02:26.022595 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:02:26.022601 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:02:26.022607 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-22 20:02:26.022632 | orchestrator | changed: [testbed-node-0] => 
(item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:02:26.022639 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:02:26.022645 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:02:26.022651 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:02:26.022658 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:02:26.022664 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:02:26.022670 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:02:26.022676 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:02:26.022682 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:02:26.022688 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:02:26.022695 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:02:26.022701 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:02:26.022707 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:02:26.022713 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:02:26.022725 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:02:26.022734 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:02:26.022740 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:02:26.022747 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:02:26.022753 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:02:26.022759 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:02:26.022765 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:02:26.022771 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:02:26.022777 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:02:26.022783 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:02:26.022789 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:02:26.022796 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:02:26.022802 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:02:26.022808 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:02:26.022814 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:02:26.022820 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:02:26.022827 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:02:26.022833 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:02:26.022839 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-22 20:02:26.022845 | orchestrator | changed: 
[testbed-node-5] => (item=/var/run/ceph) 2025-06-22 20:02:26.022851 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-22 20:02:26.022857 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:02:26.022864 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:02:26.022870 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-22 20:02:26.022876 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-22 20:02:26.022882 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-22 20:02:26.022888 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-22 20:02:26.022894 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-22 20:02:26.022901 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:02:26.022907 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-22 20:02:26.022913 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-22 20:02:26.022919 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-22 20:02:26.022925 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-22 20:02:26.022931 | orchestrator | 2025-06-22 20:02:26.022938 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-22 20:02:26.022944 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:06.952) 0:02:58.328 *********** 2025-06-22 20:02:26.022950 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.022956 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.022962 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.022969 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.022975 | orchestrator | 2025-06-22 20:02:26.022981 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-22 20:02:26.022991 | orchestrator | Sunday 22 June 2025 19:53:56 +0000 (0:00:00.958) 0:02:59.287 *********** 2025-06-22 20:02:26.023015 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023022 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023029 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023035 | orchestrator | 2025-06-22 20:02:26.023041 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-06-22 20:02:26.023047 | orchestrator | Sunday 22 June 2025 19:53:56 +0000 (0:00:00.748) 0:03:00.035 *********** 2025-06-22 20:02:26.023054 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023060 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023066 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': 
'192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023072 | orchestrator | 2025-06-22 20:02:26.023078 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-22 20:02:26.023085 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:01.578) 0:03:01.614 *********** 2025-06-22 20:02:26.023091 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.023097 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.023103 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.023113 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023119 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023125 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023156 | orchestrator | 2025-06-22 20:02:26.023162 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-22 20:02:26.023169 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.641) 0:03:02.256 *********** 2025-06-22 20:02:26.023175 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.023181 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.023187 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.023193 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023200 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023206 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023212 | orchestrator | 2025-06-22 20:02:26.023218 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-22 20:02:26.023224 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.708) 0:03:02.964 *********** 2025-06-22 20:02:26.023230 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.023237 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.023243 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.023249 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023255 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023261 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023267 | orchestrator | 2025-06-22 20:02:26.023274 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-22 20:02:26.023280 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:00.518) 0:03:03.483 *********** 2025-06-22 20:02:26.023286 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.023292 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.023299 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.023305 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023311 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023317 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023323 | orchestrator | 2025-06-22 20:02:26.023329 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-22 20:02:26.023340 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:00.635) 0:03:04.118 *********** 2025-06-22 20:02:26.023347 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.023353 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.023359 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.023365 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023371 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023377 | orchestrator | 
skipping: [testbed-node-2] 2025-06-22 20:02:26.023383 | orchestrator | 2025-06-22 20:02:26.023390 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-22 20:02:26.023396 | orchestrator | Sunday 22 June 2025 19:54:01 +0000 (0:00:00.635) 0:03:04.753 *********** 2025-06-22 20:02:26.023402 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.023409 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.023415 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.023421 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023427 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023433 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023439 | orchestrator | 2025-06-22 20:02:26.023445 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-22 20:02:26.023452 | orchestrator | Sunday 22 June 2025 19:54:02 +0000 (0:00:00.694) 0:03:05.448 *********** 2025-06-22 20:02:26.023458 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.023464 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.023470 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.023476 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023482 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023489 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023495 | orchestrator | 2025-06-22 20:02:26.023501 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-22 20:02:26.023508 | orchestrator | Sunday 22 June 2025 19:54:02 +0000 (0:00:00.555) 0:03:06.003 *********** 2025-06-22 20:02:26.023514 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.023520 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.023526 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.023532 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023539 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023545 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023551 | orchestrator | 2025-06-22 20:02:26.023577 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-22 20:02:26.023584 | orchestrator | Sunday 22 June 2025 19:54:03 +0000 (0:00:00.611) 0:03:06.615 *********** 2025-06-22 20:02:26.023590 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023596 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023602 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023608 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.023615 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.023621 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.023627 | orchestrator | 2025-06-22 20:02:26.023633 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-22 20:02:26.023639 | orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:03.883) 0:03:10.498 *********** 2025-06-22 20:02:26.023646 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.023652 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.023658 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.023664 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023670 | orchestrator | 
skipping: [testbed-node-1] 2025-06-22 20:02:26.023676 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023683 | orchestrator | 2025-06-22 20:02:26.023689 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-22 20:02:26.023695 | orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:00.666) 0:03:11.164 *********** 2025-06-22 20:02:26.023708 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.023714 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.023720 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.023726 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023732 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023739 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023745 | orchestrator | 2025-06-22 20:02:26.023754 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-22 20:02:26.023761 | orchestrator | Sunday 22 June 2025 19:54:08 +0000 (0:00:00.635) 0:03:11.800 *********** 2025-06-22 20:02:26.023767 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.023773 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.023779 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.023785 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023791 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023797 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023803 | orchestrator | 2025-06-22 20:02:26.023810 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-06-22 20:02:26.023816 | orchestrator | Sunday 22 June 2025 19:54:09 +0000 (0:00:00.731) 0:03:12.531 *********** 2025-06-22 20:02:26.023822 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023828 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023835 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.023841 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023847 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023853 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023859 | orchestrator | 2025-06-22 20:02:26.023865 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-22 20:02:26.023872 | orchestrator | Sunday 22 June 2025 19:54:10 +0000 (0:00:00.780) 0:03:13.311 *********** 2025-06-22 20:02:26.023879 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-22 20:02:26.023887 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-22 20:02:26.023894 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 20:02:26.023901 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-22 20:02:26.023907 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-22 20:02:26.023914 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.023920 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-22 20:02:26.023949 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-22 20:02:26.023957 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.023963 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.023969 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.023975 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.023981 | orchestrator | 2025-06-22 20:02:26.023988 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-22 20:02:26.023994 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:01.028) 0:03:14.340 *********** 2025-06-22 20:02:26.024000 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024006 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.024012 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.024019 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024025 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024031 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024037 | orchestrator | 2025-06-22 20:02:26.024043 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-22 20:02:26.024049 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:00.676) 0:03:15.017 *********** 2025-06-22 20:02:26.024055 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024061 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.024068 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.024074 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024080 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024086 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024092 | orchestrator | 2025-06-22 20:02:26.024102 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 20:02:26.024108 | orchestrator | Sunday 22 June 2025 19:54:12 +0000 (0:00:00.989) 0:03:16.007 *********** 2025-06-22 
20:02:26.024114 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024121 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.024158 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.024166 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024172 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024178 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024184 | orchestrator | 2025-06-22 20:02:26.024191 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 20:02:26.024197 | orchestrator | Sunday 22 June 2025 19:54:13 +0000 (0:00:00.694) 0:03:16.702 *********** 2025-06-22 20:02:26.024203 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024209 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.024215 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.024221 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024227 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024234 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024240 | orchestrator | 2025-06-22 20:02:26.024246 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 20:02:26.024252 | orchestrator | Sunday 22 June 2025 19:54:14 +0000 (0:00:00.777) 0:03:17.480 *********** 2025-06-22 20:02:26.024258 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024264 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.024270 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.024277 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024283 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024289 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024295 | orchestrator | 2025-06-22 20:02:26.024301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 20:02:26.024312 | orchestrator | Sunday 22 June 2025 19:54:14 +0000 (0:00:00.463) 0:03:17.944 *********** 2025-06-22 20:02:26.024318 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.024324 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.024331 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.024337 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024343 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024349 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024355 | orchestrator | 2025-06-22 20:02:26.024361 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 20:02:26.024367 | orchestrator | Sunday 22 June 2025 19:54:15 +0000 (0:00:00.676) 0:03:18.621 *********** 2025-06-22 20:02:26.024374 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.024380 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.024386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.024392 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024398 | orchestrator | 2025-06-22 20:02:26.024405 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 20:02:26.024411 | orchestrator | Sunday 22 June 2025 19:54:15 +0000 (0:00:00.357) 0:03:18.978 *********** 2025-06-22 20:02:26.024417 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2025-06-22 20:02:26.024423 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.024429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.024435 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024441 | orchestrator | 2025-06-22 20:02:26.024448 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 20:02:26.024454 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:00.355) 0:03:19.334 *********** 2025-06-22 20:02:26.024460 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.024466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.024472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.024477 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024482 | orchestrator | 2025-06-22 20:02:26.024488 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 20:02:26.024493 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:00.360) 0:03:19.694 *********** 2025-06-22 20:02:26.024515 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.024522 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.024527 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.024533 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024538 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024543 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024549 | orchestrator | 2025-06-22 20:02:26.024554 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 20:02:26.024559 | orchestrator | Sunday 22 June 2025 19:54:17 +0000 (0:00:00.603) 0:03:20.298 *********** 2025-06-22 20:02:26.024565 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:02:26.024570 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 20:02:26.024575 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 20:02:26.024581 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-22 20:02:26.024586 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024591 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-22 20:02:26.024597 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024602 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-22 20:02:26.024607 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024613 | orchestrator | 2025-06-22 20:02:26.024618 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-22 20:02:26.024624 | orchestrator | Sunday 22 June 2025 19:54:18 +0000 (0:00:01.879) 0:03:22.177 *********** 2025-06-22 20:02:26.024629 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.024648 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.024653 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.024659 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.024664 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.024669 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.024675 | orchestrator | 2025-06-22 20:02:26.024683 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:02:26.024689 | orchestrator | Sunday 22 June 2025 
19:54:21 +0000 (0:00:02.607) 0:03:24.784 *********** 2025-06-22 20:02:26.024694 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.024699 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.024705 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.024710 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.024716 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.024721 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.024726 | orchestrator | 2025-06-22 20:02:26.024731 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-22 20:02:26.024737 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:01.088) 0:03:25.873 *********** 2025-06-22 20:02:26.024742 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024748 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.024753 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.024758 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.024764 | orchestrator | 2025-06-22 20:02:26.024769 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-22 20:02:26.024774 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:00.829) 0:03:26.702 *********** 2025-06-22 20:02:26.024780 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.024785 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.024791 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.024796 | orchestrator | 2025-06-22 20:02:26.024801 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-22 20:02:26.024807 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:00.317) 0:03:27.019 *********** 2025-06-22 20:02:26.024812 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.024818 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.024823 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.024828 | orchestrator | 2025-06-22 20:02:26.024833 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-22 20:02:26.024839 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:01.312) 0:03:28.332 *********** 2025-06-22 20:02:26.024844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:02:26.024850 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:02:26.024855 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:02:26.024860 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024866 | orchestrator | 2025-06-22 20:02:26.024871 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-22 20:02:26.024876 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:00.873) 0:03:29.206 *********** 2025-06-22 20:02:26.024881 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.024887 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.024892 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.024898 | orchestrator | 2025-06-22 20:02:26.024903 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-22 20:02:26.024908 | orchestrator | Sunday 22 June 2025 19:54:26 +0000 (0:00:00.291) 0:03:29.497 *********** 2025-06-22 20:02:26.024914 | 
orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.024919 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.024925 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.024930 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.024939 | orchestrator | 2025-06-22 20:02:26.024944 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-22 20:02:26.024950 | orchestrator | Sunday 22 June 2025 19:54:27 +0000 (0:00:00.858) 0:03:30.355 *********** 2025-06-22 20:02:26.024955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.024960 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.024966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.024971 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.024976 | orchestrator | 2025-06-22 20:02:26.024982 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-22 20:02:26.024987 | orchestrator | Sunday 22 June 2025 19:54:27 +0000 (0:00:00.351) 0:03:30.707 *********** 2025-06-22 20:02:26.024993 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025013 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.025019 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.025025 | orchestrator | 2025-06-22 20:02:26.025030 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-22 20:02:26.025036 | orchestrator | Sunday 22 June 2025 19:54:27 +0000 (0:00:00.303) 0:03:31.011 *********** 2025-06-22 20:02:26.025041 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025047 | orchestrator | 2025-06-22 20:02:26.025052 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-22 20:02:26.025058 | orchestrator | Sunday 22 June 2025 19:54:27 +0000 (0:00:00.200) 0:03:31.212 *********** 2025-06-22 20:02:26.025063 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025068 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.025074 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.025079 | orchestrator | 2025-06-22 20:02:26.025085 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-22 20:02:26.025090 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:00.280) 0:03:31.492 *********** 2025-06-22 20:02:26.025095 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025101 | orchestrator | 2025-06-22 20:02:26.025106 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-22 20:02:26.025112 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:00.227) 0:03:31.719 *********** 2025-06-22 20:02:26.025117 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025122 | orchestrator | 2025-06-22 20:02:26.025146 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-22 20:02:26.025152 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:00.209) 0:03:31.929 *********** 2025-06-22 20:02:26.025158 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025163 | orchestrator | 2025-06-22 20:02:26.025172 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] 
****************************** 2025-06-22 20:02:26.025177 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:00.108) 0:03:32.038 *********** 2025-06-22 20:02:26.025183 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025188 | orchestrator | 2025-06-22 20:02:26.025194 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-22 20:02:26.025199 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:00.559) 0:03:32.598 *********** 2025-06-22 20:02:26.025204 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025210 | orchestrator | 2025-06-22 20:02:26.025215 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-22 20:02:26.025221 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:00.182) 0:03:32.780 *********** 2025-06-22 20:02:26.025226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.025232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.025238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.025243 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025249 | orchestrator | 2025-06-22 20:02:26.025254 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-22 20:02:26.025264 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:00.366) 0:03:33.147 *********** 2025-06-22 20:02:26.025269 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025275 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.025280 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.025285 | orchestrator | 2025-06-22 20:02:26.025291 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-22 20:02:26.025296 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:00.264) 0:03:33.411 *********** 2025-06-22 20:02:26.025302 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025307 | orchestrator | 2025-06-22 20:02:26.025313 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-22 20:02:26.025318 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:00.203) 0:03:33.615 *********** 2025-06-22 20:02:26.025324 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025329 | orchestrator | 2025-06-22 20:02:26.025334 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-22 20:02:26.025340 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:00.204) 0:03:33.819 *********** 2025-06-22 20:02:26.025345 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.025351 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.025356 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.025361 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.025367 | orchestrator | 2025-06-22 20:02:26.025372 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-22 20:02:26.025378 | orchestrator | Sunday 22 June 2025 19:54:31 +0000 (0:00:00.929) 0:03:34.749 *********** 2025-06-22 20:02:26.025383 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.025388 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.025394 | orchestrator | 
ok: [testbed-node-5] 2025-06-22 20:02:26.025399 | orchestrator | 2025-06-22 20:02:26.025405 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-22 20:02:26.025410 | orchestrator | Sunday 22 June 2025 19:54:31 +0000 (0:00:00.276) 0:03:35.026 *********** 2025-06-22 20:02:26.025416 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.025421 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.025426 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.025432 | orchestrator | 2025-06-22 20:02:26.025437 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-22 20:02:26.025443 | orchestrator | Sunday 22 June 2025 19:54:32 +0000 (0:00:01.173) 0:03:36.199 *********** 2025-06-22 20:02:26.025448 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.025454 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.025459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.025464 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025470 | orchestrator | 2025-06-22 20:02:26.025475 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-22 20:02:26.025497 | orchestrator | Sunday 22 June 2025 19:54:33 +0000 (0:00:00.732) 0:03:36.931 *********** 2025-06-22 20:02:26.025503 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.025509 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.025514 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.025520 | orchestrator | 2025-06-22 20:02:26.025525 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-22 20:02:26.025531 | orchestrator | Sunday 22 June 2025 19:54:34 +0000 (0:00:00.446) 0:03:37.378 *********** 2025-06-22 20:02:26.025536 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.025541 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.025547 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.025552 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.025561 | orchestrator | 2025-06-22 20:02:26.025567 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-22 20:02:26.025572 | orchestrator | Sunday 22 June 2025 19:54:35 +0000 (0:00:00.873) 0:03:38.252 *********** 2025-06-22 20:02:26.025577 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.025583 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.025588 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.025594 | orchestrator | 2025-06-22 20:02:26.025599 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-22 20:02:26.025604 | orchestrator | Sunday 22 June 2025 19:54:35 +0000 (0:00:00.625) 0:03:38.877 *********** 2025-06-22 20:02:26.025610 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.025615 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.025620 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.025626 | orchestrator | 2025-06-22 20:02:26.025631 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-22 20:02:26.025640 | orchestrator | Sunday 22 June 2025 19:54:37 +0000 (0:00:01.398) 0:03:40.276 
*********** 2025-06-22 20:02:26.025645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.025651 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.025656 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.025661 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025667 | orchestrator | 2025-06-22 20:02:26.025672 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-22 20:02:26.025677 | orchestrator | Sunday 22 June 2025 19:54:37 +0000 (0:00:00.612) 0:03:40.889 *********** 2025-06-22 20:02:26.025683 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.025688 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.025694 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.025699 | orchestrator | 2025-06-22 20:02:26.025704 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-22 20:02:26.025710 | orchestrator | Sunday 22 June 2025 19:54:37 +0000 (0:00:00.349) 0:03:41.238 *********** 2025-06-22 20:02:26.025715 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025721 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.025726 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.025731 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.025737 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.025742 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.025747 | orchestrator | 2025-06-22 20:02:26.025753 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-22 20:02:26.025758 | orchestrator | Sunday 22 June 2025 19:54:38 +0000 (0:00:00.881) 0:03:42.120 *********** 2025-06-22 20:02:26.025763 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.025769 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.025774 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.025779 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.025785 | orchestrator | 2025-06-22 20:02:26.025790 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-22 20:02:26.025796 | orchestrator | Sunday 22 June 2025 19:54:39 +0000 (0:00:01.071) 0:03:43.191 *********** 2025-06-22 20:02:26.025801 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.025806 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.025812 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.025817 | orchestrator | 2025-06-22 20:02:26.025822 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-22 20:02:26.025828 | orchestrator | Sunday 22 June 2025 19:54:40 +0000 (0:00:00.408) 0:03:43.600 *********** 2025-06-22 20:02:26.025833 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.025838 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.025844 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.025853 | orchestrator | 2025-06-22 20:02:26.025858 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-22 20:02:26.025863 | orchestrator | Sunday 22 June 2025 19:54:41 +0000 (0:00:01.356) 0:03:44.957 *********** 2025-06-22 20:02:26.025869 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:02:26.025874 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:02:26.025880 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:02:26.025885 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.025890 | orchestrator | 2025-06-22 20:02:26.025896 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-22 20:02:26.025901 | orchestrator | Sunday 22 June 2025 19:54:42 +0000 (0:00:01.021) 0:03:45.978 *********** 2025-06-22 20:02:26.025907 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.025912 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.025917 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.025923 | orchestrator | 2025-06-22 20:02:26.025928 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-22 20:02:26.025934 | orchestrator | 2025-06-22 20:02:26.025939 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:02:26.025944 | orchestrator | Sunday 22 June 2025 19:54:43 +0000 (0:00:00.794) 0:03:46.773 *********** 2025-06-22 20:02:26.025950 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.025955 | orchestrator | 2025-06-22 20:02:26.025977 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:02:26.025984 | orchestrator | Sunday 22 June 2025 19:54:43 +0000 (0:00:00.429) 0:03:47.202 *********** 2025-06-22 20:02:26.025989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.025995 | orchestrator | 2025-06-22 20:02:26.026000 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:02:26.026005 | orchestrator | Sunday 22 June 2025 19:54:44 +0000 (0:00:00.628) 0:03:47.831 *********** 2025-06-22 20:02:26.026011 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026033 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026039 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026044 | orchestrator | 2025-06-22 20:02:26.026050 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:02:26.026055 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.749) 0:03:48.580 *********** 2025-06-22 20:02:26.026061 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026066 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026072 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026077 | orchestrator | 2025-06-22 20:02:26.026083 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:02:26.026088 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.324) 0:03:48.904 *********** 2025-06-22 20:02:26.026093 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026099 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026104 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026109 | orchestrator | 2025-06-22 20:02:26.026115 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:02:26.026123 | orchestrator | 
Sunday 22 June 2025 19:54:46 +0000 (0:00:00.376) 0:03:49.281 *********** 2025-06-22 20:02:26.026146 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026155 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026165 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026173 | orchestrator | 2025-06-22 20:02:26.026182 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:02:26.026191 | orchestrator | Sunday 22 June 2025 19:54:46 +0000 (0:00:00.492) 0:03:49.773 *********** 2025-06-22 20:02:26.026198 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026203 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026215 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026220 | orchestrator | 2025-06-22 20:02:26.026226 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:02:26.026231 | orchestrator | Sunday 22 June 2025 19:54:47 +0000 (0:00:00.789) 0:03:50.563 *********** 2025-06-22 20:02:26.026236 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026242 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026247 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026253 | orchestrator | 2025-06-22 20:02:26.026258 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:02:26.026263 | orchestrator | Sunday 22 June 2025 19:54:47 +0000 (0:00:00.328) 0:03:50.891 *********** 2025-06-22 20:02:26.026269 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026274 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026279 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026285 | orchestrator | 2025-06-22 20:02:26.026290 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:02:26.026295 | orchestrator | Sunday 22 June 2025 19:54:47 +0000 (0:00:00.312) 0:03:51.204 *********** 2025-06-22 20:02:26.026301 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026306 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026311 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026317 | orchestrator | 2025-06-22 20:02:26.026322 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:02:26.026328 | orchestrator | Sunday 22 June 2025 19:54:49 +0000 (0:00:01.154) 0:03:52.358 *********** 2025-06-22 20:02:26.026333 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026338 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026344 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026349 | orchestrator | 2025-06-22 20:02:26.026354 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:02:26.026360 | orchestrator | Sunday 22 June 2025 19:54:49 +0000 (0:00:00.842) 0:03:53.201 *********** 2025-06-22 20:02:26.026365 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026371 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026376 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026381 | orchestrator | 2025-06-22 20:02:26.026387 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:02:26.026392 | orchestrator | Sunday 22 June 2025 19:54:50 +0000 (0:00:00.343) 0:03:53.544 *********** 2025-06-22 20:02:26.026397 | orchestrator 
| ok: [testbed-node-0] 2025-06-22 20:02:26.026403 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026408 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026413 | orchestrator | 2025-06-22 20:02:26.026419 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:02:26.026424 | orchestrator | Sunday 22 June 2025 19:54:50 +0000 (0:00:00.409) 0:03:53.954 *********** 2025-06-22 20:02:26.026430 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026435 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026440 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026446 | orchestrator | 2025-06-22 20:02:26.026451 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:02:26.026457 | orchestrator | Sunday 22 June 2025 19:54:51 +0000 (0:00:00.581) 0:03:54.535 *********** 2025-06-22 20:02:26.026462 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026467 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026473 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026478 | orchestrator | 2025-06-22 20:02:26.026483 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:02:26.026489 | orchestrator | Sunday 22 June 2025 19:54:51 +0000 (0:00:00.317) 0:03:54.852 *********** 2025-06-22 20:02:26.026494 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026499 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026505 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026516 | orchestrator | 2025-06-22 20:02:26.026541 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:02:26.026548 | orchestrator | Sunday 22 June 2025 19:54:51 +0000 (0:00:00.323) 0:03:55.176 *********** 2025-06-22 20:02:26.026553 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026559 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026564 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026569 | orchestrator | 2025-06-22 20:02:26.026575 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:02:26.026580 | orchestrator | Sunday 22 June 2025 19:54:52 +0000 (0:00:00.319) 0:03:55.496 *********** 2025-06-22 20:02:26.026585 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026591 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.026596 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.026601 | orchestrator | 2025-06-22 20:02:26.026607 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:02:26.026612 | orchestrator | Sunday 22 June 2025 19:54:52 +0000 (0:00:00.458) 0:03:55.954 *********** 2025-06-22 20:02:26.026618 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026623 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026628 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026634 | orchestrator | 2025-06-22 20:02:26.026639 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:02:26.026645 | orchestrator | Sunday 22 June 2025 19:54:53 +0000 (0:00:00.307) 0:03:56.261 *********** 2025-06-22 20:02:26.026650 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026656 | orchestrator | ok: [testbed-node-1] 
2025-06-22 20:02:26.026661 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026666 | orchestrator | 2025-06-22 20:02:26.026672 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:02:26.026680 | orchestrator | Sunday 22 June 2025 19:54:53 +0000 (0:00:00.292) 0:03:56.553 *********** 2025-06-22 20:02:26.026686 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026691 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026696 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026702 | orchestrator | 2025-06-22 20:02:26.026707 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-22 20:02:26.026713 | orchestrator | Sunday 22 June 2025 19:54:53 +0000 (0:00:00.666) 0:03:57.220 *********** 2025-06-22 20:02:26.026718 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026723 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026729 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026734 | orchestrator | 2025-06-22 20:02:26.026740 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-22 20:02:26.026745 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:00.341) 0:03:57.561 *********** 2025-06-22 20:02:26.026751 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.026756 | orchestrator | 2025-06-22 20:02:26.026762 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-22 20:02:26.026767 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:00.517) 0:03:58.079 *********** 2025-06-22 20:02:26.026772 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.026778 | orchestrator | 2025-06-22 20:02:26.026783 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-22 20:02:26.026788 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:00.118) 0:03:58.198 *********** 2025-06-22 20:02:26.026794 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 20:02:26.026799 | orchestrator | 2025-06-22 20:02:26.026805 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-22 20:02:26.026810 | orchestrator | Sunday 22 June 2025 19:54:55 +0000 (0:00:00.846) 0:03:59.045 *********** 2025-06-22 20:02:26.026815 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026821 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026826 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026835 | orchestrator | 2025-06-22 20:02:26.026841 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-22 20:02:26.026846 | orchestrator | Sunday 22 June 2025 19:54:56 +0000 (0:00:00.552) 0:03:59.597 *********** 2025-06-22 20:02:26.026851 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026857 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026862 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026867 | orchestrator | 2025-06-22 20:02:26.026873 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-22 20:02:26.026878 | orchestrator | Sunday 22 June 2025 19:54:56 +0000 (0:00:00.382) 0:03:59.979 *********** 2025-06-22 20:02:26.026883 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.026889 | orchestrator 
| changed: [testbed-node-0] 2025-06-22 20:02:26.026894 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.026900 | orchestrator | 2025-06-22 20:02:26.026905 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-22 20:02:26.026910 | orchestrator | Sunday 22 June 2025 19:54:57 +0000 (0:00:01.241) 0:04:01.221 *********** 2025-06-22 20:02:26.026916 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.026921 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.026926 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.026932 | orchestrator | 2025-06-22 20:02:26.026937 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-22 20:02:26.026943 | orchestrator | Sunday 22 June 2025 19:54:58 +0000 (0:00:00.797) 0:04:02.019 *********** 2025-06-22 20:02:26.026948 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.026953 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.026959 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.026964 | orchestrator | 2025-06-22 20:02:26.026969 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-22 20:02:26.026975 | orchestrator | Sunday 22 June 2025 19:54:59 +0000 (0:00:00.844) 0:04:02.864 *********** 2025-06-22 20:02:26.026980 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.026985 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.026991 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.026996 | orchestrator | 2025-06-22 20:02:26.027002 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-22 20:02:26.027007 | orchestrator | Sunday 22 June 2025 19:55:00 +0000 (0:00:00.806) 0:04:03.670 *********** 2025-06-22 20:02:26.027013 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.027018 | orchestrator | 2025-06-22 20:02:26.027039 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-22 20:02:26.027045 | orchestrator | Sunday 22 June 2025 19:55:01 +0000 (0:00:01.401) 0:04:05.071 *********** 2025-06-22 20:02:26.027050 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.027056 | orchestrator | 2025-06-22 20:02:26.027061 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-22 20:02:26.027067 | orchestrator | Sunday 22 June 2025 19:55:02 +0000 (0:00:00.675) 0:04:05.746 *********** 2025-06-22 20:02:26.027072 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:02:26.027078 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.027083 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.027088 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:02:26.027094 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-22 20:02:26.027099 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:02:26.027105 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:02:26.027110 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-06-22 20:02:26.027116 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:02:26.027121 | orchestrator | ok: 
[testbed-node-1 -> {{ item }}] 2025-06-22 20:02:26.027165 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-22 20:02:26.027171 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-22 20:02:26.027177 | orchestrator | 2025-06-22 20:02:26.027186 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-22 20:02:26.027191 | orchestrator | Sunday 22 June 2025 19:55:06 +0000 (0:00:03.595) 0:04:09.341 *********** 2025-06-22 20:02:26.027196 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.027202 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.027207 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.027213 | orchestrator | 2025-06-22 20:02:26.027218 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-22 20:02:26.027223 | orchestrator | Sunday 22 June 2025 19:55:07 +0000 (0:00:01.496) 0:04:10.838 *********** 2025-06-22 20:02:26.027229 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.027234 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.027240 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.027245 | orchestrator | 2025-06-22 20:02:26.027250 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-22 20:02:26.027256 | orchestrator | Sunday 22 June 2025 19:55:07 +0000 (0:00:00.310) 0:04:11.148 *********** 2025-06-22 20:02:26.027261 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.027267 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.027272 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.027277 | orchestrator | 2025-06-22 20:02:26.027283 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-06-22 20:02:26.027288 | orchestrator | Sunday 22 June 2025 19:55:08 +0000 (0:00:00.249) 0:04:11.398 *********** 2025-06-22 20:02:26.027294 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.027299 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.027304 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.027310 | orchestrator | 2025-06-22 20:02:26.027315 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-22 20:02:26.027321 | orchestrator | Sunday 22 June 2025 19:55:10 +0000 (0:00:01.991) 0:04:13.390 *********** 2025-06-22 20:02:26.027326 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.027332 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.027337 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.027342 | orchestrator | 2025-06-22 20:02:26.027348 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-22 20:02:26.027353 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:01.875) 0:04:15.265 *********** 2025-06-22 20:02:26.027359 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.027364 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.027369 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.027375 | orchestrator | 2025-06-22 20:02:26.027380 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-22 20:02:26.027386 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.240) 0:04:15.505 *********** 2025-06-22 20:02:26.027391 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-22 20:02:26.027396 | orchestrator | 2025-06-22 20:02:26.027402 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-22 20:02:26.027407 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.579) 0:04:16.085 *********** 2025-06-22 20:02:26.027413 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.027418 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.027424 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.027429 | orchestrator | 2025-06-22 20:02:26.027435 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-22 20:02:26.027440 | orchestrator | Sunday 22 June 2025 19:55:13 +0000 (0:00:00.480) 0:04:16.566 *********** 2025-06-22 20:02:26.027446 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.027451 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.027456 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.027465 | orchestrator | 2025-06-22 20:02:26.027471 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-22 20:02:26.027476 | orchestrator | Sunday 22 June 2025 19:55:13 +0000 (0:00:00.290) 0:04:16.856 *********** 2025-06-22 20:02:26.027482 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.027487 | orchestrator | 2025-06-22 20:02:26.027492 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-22 20:02:26.027498 | orchestrator | Sunday 22 June 2025 19:55:14 +0000 (0:00:00.571) 0:04:17.428 *********** 2025-06-22 20:02:26.027503 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.027509 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.027514 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.027519 | orchestrator | 2025-06-22 20:02:26.027542 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-22 20:02:26.027548 | orchestrator | Sunday 22 June 2025 19:55:16 +0000 (0:00:02.033) 0:04:19.461 *********** 2025-06-22 20:02:26.027554 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.027559 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.027565 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.027570 | orchestrator | 2025-06-22 20:02:26.027575 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-22 20:02:26.027581 | orchestrator | Sunday 22 June 2025 19:55:17 +0000 (0:00:01.099) 0:04:20.561 *********** 2025-06-22 20:02:26.027586 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.027592 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.027597 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.027602 | orchestrator | 2025-06-22 20:02:26.027608 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-22 20:02:26.027613 | orchestrator | Sunday 22 June 2025 19:55:19 +0000 (0:00:01.904) 0:04:22.466 *********** 2025-06-22 20:02:26.027619 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.027624 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.027629 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.027635 | orchestrator | 2025-06-22 20:02:26.027640 | orchestrator | TASK [ceph-mon : 
Include_tasks ceph_keys.yml] ********************************** 2025-06-22 20:02:26.027646 | orchestrator | Sunday 22 June 2025 19:55:21 +0000 (0:00:01.936) 0:04:24.402 *********** 2025-06-22 20:02:26.027651 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.027657 | orchestrator | 2025-06-22 20:02:26.027665 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-06-22 20:02:26.027670 | orchestrator | Sunday 22 June 2025 19:55:21 +0000 (0:00:00.646) 0:04:25.048 *********** 2025-06-22 20:02:26.027676 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-06-22 20:02:26.027681 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.027687 | orchestrator | 2025-06-22 20:02:26.027692 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-22 20:02:26.027697 | orchestrator | Sunday 22 June 2025 19:55:43 +0000 (0:00:22.003) 0:04:47.052 *********** 2025-06-22 20:02:26.027702 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.027707 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.027712 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.027717 | orchestrator | 2025-06-22 20:02:26.027721 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-22 20:02:26.027726 | orchestrator | Sunday 22 June 2025 19:55:54 +0000 (0:00:10.580) 0:04:57.633 *********** 2025-06-22 20:02:26.027731 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.027736 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.027741 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.027746 | orchestrator | 2025-06-22 20:02:26.027751 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-22 20:02:26.027755 | orchestrator | Sunday 22 June 2025 19:55:54 +0000 (0:00:00.345) 0:04:57.978 *********** 2025-06-22 20:02:26.027766 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62a3397c07413b1a4180e3f8e5f53aa142ee17fa'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-22 20:02:26.027773 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62a3397c07413b1a4180e3f8e5f53aa142ee17fa'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-22 20:02:26.027778 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62a3397c07413b1a4180e3f8e5f53aa142ee17fa'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-22 20:02:26.027784 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 
'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62a3397c07413b1a4180e3f8e5f53aa142ee17fa'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-22 20:02:26.027790 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62a3397c07413b1a4180e3f8e5f53aa142ee17fa'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-22 20:02:26.027808 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__62a3397c07413b1a4180e3f8e5f53aa142ee17fa'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__62a3397c07413b1a4180e3f8e5f53aa142ee17fa'}])  2025-06-22 20:02:26.027815 | orchestrator | 2025-06-22 20:02:26.027820 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:02:26.027825 | orchestrator | Sunday 22 June 2025 19:56:11 +0000 (0:00:16.577) 0:05:14.555 *********** 2025-06-22 20:02:26.027830 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.027835 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.027839 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.027844 | orchestrator | 2025-06-22 20:02:26.027849 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-22 20:02:26.027854 | orchestrator | Sunday 22 June 2025 19:56:11 +0000 (0:00:00.343) 0:05:14.898 *********** 2025-06-22 20:02:26.027858 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.027863 | orchestrator | 2025-06-22 20:02:26.027868 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-22 20:02:26.027873 | orchestrator | Sunday 22 June 2025 19:56:12 +0000 (0:00:00.752) 0:05:15.651 *********** 2025-06-22 20:02:26.027877 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.027882 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.027890 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.027894 | orchestrator | 2025-06-22 20:02:26.027899 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-22 20:02:26.027904 | orchestrator | Sunday 22 June 2025 19:56:12 +0000 (0:00:00.404) 0:05:16.055 *********** 2025-06-22 20:02:26.027913 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.027918 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.027923 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.027928 | orchestrator | 2025-06-22 20:02:26.027933 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-22 20:02:26.027937 | orchestrator | Sunday 22 June 2025 19:56:13 +0000 (0:00:00.400) 0:05:16.457 *********** 2025-06-22 20:02:26.027942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:02:26.027947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:02:26.027952 | orchestrator | 
skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:02:26.027957 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.027961 | orchestrator | 2025-06-22 20:02:26.027966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-22 20:02:26.027971 | orchestrator | Sunday 22 June 2025 19:56:14 +0000 (0:00:00.848) 0:05:17.305 *********** 2025-06-22 20:02:26.027976 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.027981 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.027986 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.027990 | orchestrator | 2025-06-22 20:02:26.027995 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-22 20:02:26.028000 | orchestrator | 2025-06-22 20:02:26.028005 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:02:26.028010 | orchestrator | Sunday 22 June 2025 19:56:14 +0000 (0:00:00.759) 0:05:18.065 *********** 2025-06-22 20:02:26.028015 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.028019 | orchestrator | 2025-06-22 20:02:26.028024 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:02:26.028029 | orchestrator | Sunday 22 June 2025 19:56:15 +0000 (0:00:00.544) 0:05:18.609 *********** 2025-06-22 20:02:26.028034 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.028039 | orchestrator | 2025-06-22 20:02:26.028043 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:02:26.028048 | orchestrator | Sunday 22 June 2025 19:56:16 +0000 (0:00:00.765) 0:05:19.374 *********** 2025-06-22 20:02:26.028053 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028058 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028062 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.028067 | orchestrator | 2025-06-22 20:02:26.028072 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:02:26.028077 | orchestrator | Sunday 22 June 2025 19:56:16 +0000 (0:00:00.701) 0:05:20.076 *********** 2025-06-22 20:02:26.028082 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028086 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028091 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028096 | orchestrator | 2025-06-22 20:02:26.028101 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:02:26.028105 | orchestrator | Sunday 22 June 2025 19:56:17 +0000 (0:00:00.313) 0:05:20.390 *********** 2025-06-22 20:02:26.028110 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028115 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028120 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028125 | orchestrator | 2025-06-22 20:02:26.028144 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:02:26.028150 | orchestrator | Sunday 22 June 2025 19:56:17 +0000 (0:00:00.526) 0:05:20.917 *********** 2025-06-22 20:02:26.028155 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028159 | 
orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028164 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028169 | orchestrator | 2025-06-22 20:02:26.028174 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:02:26.028183 | orchestrator | Sunday 22 June 2025 19:56:17 +0000 (0:00:00.316) 0:05:21.234 *********** 2025-06-22 20:02:26.028188 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028193 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028212 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.028218 | orchestrator | 2025-06-22 20:02:26.028223 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:02:26.028228 | orchestrator | Sunday 22 June 2025 19:56:18 +0000 (0:00:00.712) 0:05:21.946 *********** 2025-06-22 20:02:26.028232 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028237 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028242 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028247 | orchestrator | 2025-06-22 20:02:26.028252 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:02:26.028256 | orchestrator | Sunday 22 June 2025 19:56:18 +0000 (0:00:00.300) 0:05:22.247 *********** 2025-06-22 20:02:26.028261 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028266 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028271 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028276 | orchestrator | 2025-06-22 20:02:26.028280 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:02:26.028285 | orchestrator | Sunday 22 June 2025 19:56:19 +0000 (0:00:00.548) 0:05:22.795 *********** 2025-06-22 20:02:26.028290 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028295 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028300 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.028304 | orchestrator | 2025-06-22 20:02:26.028309 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:02:26.028314 | orchestrator | Sunday 22 June 2025 19:56:20 +0000 (0:00:00.757) 0:05:23.553 *********** 2025-06-22 20:02:26.028319 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028324 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028328 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.028333 | orchestrator | 2025-06-22 20:02:26.028341 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:02:26.028346 | orchestrator | Sunday 22 June 2025 19:56:21 +0000 (0:00:00.860) 0:05:24.414 *********** 2025-06-22 20:02:26.028351 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028356 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028360 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028365 | orchestrator | 2025-06-22 20:02:26.028370 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:02:26.028375 | orchestrator | Sunday 22 June 2025 19:56:21 +0000 (0:00:00.321) 0:05:24.735 *********** 2025-06-22 20:02:26.028380 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028385 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028389 | orchestrator | ok: [testbed-node-2] 2025-06-22 
20:02:26.028394 | orchestrator | 2025-06-22 20:02:26.028399 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:02:26.028404 | orchestrator | Sunday 22 June 2025 19:56:22 +0000 (0:00:00.537) 0:05:25.273 *********** 2025-06-22 20:02:26.028409 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028413 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028418 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028423 | orchestrator | 2025-06-22 20:02:26.028428 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:02:26.028433 | orchestrator | Sunday 22 June 2025 19:56:22 +0000 (0:00:00.338) 0:05:25.611 *********** 2025-06-22 20:02:26.028437 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028442 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028447 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028452 | orchestrator | 2025-06-22 20:02:26.028457 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:02:26.028462 | orchestrator | Sunday 22 June 2025 19:56:22 +0000 (0:00:00.293) 0:05:25.905 *********** 2025-06-22 20:02:26.028470 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028475 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028480 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028484 | orchestrator | 2025-06-22 20:02:26.028489 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:02:26.028494 | orchestrator | Sunday 22 June 2025 19:56:22 +0000 (0:00:00.300) 0:05:26.205 *********** 2025-06-22 20:02:26.028499 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028504 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028508 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028513 | orchestrator | 2025-06-22 20:02:26.028518 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:02:26.028523 | orchestrator | Sunday 22 June 2025 19:56:23 +0000 (0:00:00.544) 0:05:26.750 *********** 2025-06-22 20:02:26.028528 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028532 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028537 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028542 | orchestrator | 2025-06-22 20:02:26.028547 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:02:26.028552 | orchestrator | Sunday 22 June 2025 19:56:23 +0000 (0:00:00.309) 0:05:27.059 *********** 2025-06-22 20:02:26.028556 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028561 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028566 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.028571 | orchestrator | 2025-06-22 20:02:26.028576 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:02:26.028581 | orchestrator | Sunday 22 June 2025 19:56:24 +0000 (0:00:00.329) 0:05:27.389 *********** 2025-06-22 20:02:26.028585 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028590 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028595 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.028600 | orchestrator | 2025-06-22 20:02:26.028604 | orchestrator | TASK [ceph-handler : 
Set_fact handler_exporter_status] ************************* 2025-06-22 20:02:26.028609 | orchestrator | Sunday 22 June 2025 19:56:24 +0000 (0:00:00.342) 0:05:27.731 *********** 2025-06-22 20:02:26.028614 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028619 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028624 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.028629 | orchestrator | 2025-06-22 20:02:26.028634 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-22 20:02:26.028638 | orchestrator | Sunday 22 June 2025 19:56:25 +0000 (0:00:00.772) 0:05:28.503 *********** 2025-06-22 20:02:26.028643 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:02:26.028661 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:02:26.028667 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:02:26.028672 | orchestrator | 2025-06-22 20:02:26.028676 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-22 20:02:26.028681 | orchestrator | Sunday 22 June 2025 19:56:25 +0000 (0:00:00.667) 0:05:29.171 *********** 2025-06-22 20:02:26.028686 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.028691 | orchestrator | 2025-06-22 20:02:26.028696 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-22 20:02:26.028700 | orchestrator | Sunday 22 June 2025 19:56:26 +0000 (0:00:00.529) 0:05:29.701 *********** 2025-06-22 20:02:26.028705 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.028710 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.028715 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.028719 | orchestrator | 2025-06-22 20:02:26.028724 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-22 20:02:26.028729 | orchestrator | Sunday 22 June 2025 19:56:27 +0000 (0:00:01.144) 0:05:30.845 *********** 2025-06-22 20:02:26.028737 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028742 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028746 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028751 | orchestrator | 2025-06-22 20:02:26.028756 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-22 20:02:26.028761 | orchestrator | Sunday 22 June 2025 19:56:27 +0000 (0:00:00.368) 0:05:31.214 *********** 2025-06-22 20:02:26.028766 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:02:26.028773 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:02:26.028778 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:02:26.028783 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-22 20:02:26.028788 | orchestrator | 2025-06-22 20:02:26.028793 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-22 20:02:26.028797 | orchestrator | Sunday 22 June 2025 19:56:38 +0000 (0:00:10.921) 0:05:42.135 *********** 2025-06-22 20:02:26.028802 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028807 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028812 | orchestrator | ok: [testbed-node-2] 
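[Note on the ceph-mgr keyring steps above: the "Create ceph mgr keyring(s) on a mon node" task is where the playbook provisions one keyring per manager; a minimal sketch of the kind of ceph CLI call this corresponds to is shown below (the node name and capability strings are illustrative only — the exact flags the role uses may differ):

    # run against a monitor; "testbed-node-0" stands in for the mgr's hostname
    ceph auth get-or-create mgr.testbed-node-0 \
        mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
        -o /etc/ceph/ceph.mgr.testbed-node-0.keyring

The following "Get keys from monitors" and "Copy ceph key(s) if needed" tasks then distribute the generated keyrings from the first monitor to the other manager hosts.]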
2025-06-22 20:02:26.028816 | orchestrator | 2025-06-22 20:02:26.028821 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-22 20:02:26.028826 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:00.417) 0:05:42.553 *********** 2025-06-22 20:02:26.028831 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 20:02:26.028836 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:02:26.028840 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:02:26.028845 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.028850 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-22 20:02:26.028855 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.028860 | orchestrator | 2025-06-22 20:02:26.028864 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:02:26.028869 | orchestrator | Sunday 22 June 2025 19:56:41 +0000 (0:00:02.162) 0:05:44.715 *********** 2025-06-22 20:02:26.028874 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 20:02:26.028879 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:02:26.028884 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:02:26.028888 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:02:26.028893 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-22 20:02:26.028898 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-22 20:02:26.028902 | orchestrator | 2025-06-22 20:02:26.028907 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-22 20:02:26.028912 | orchestrator | Sunday 22 June 2025 19:56:43 +0000 (0:00:01.725) 0:05:46.440 *********** 2025-06-22 20:02:26.028917 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.028921 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.028926 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.028931 | orchestrator | 2025-06-22 20:02:26.028936 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-22 20:02:26.028940 | orchestrator | Sunday 22 June 2025 19:56:43 +0000 (0:00:00.700) 0:05:47.140 *********** 2025-06-22 20:02:26.028945 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028950 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028954 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028959 | orchestrator | 2025-06-22 20:02:26.028964 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-22 20:02:26.028969 | orchestrator | Sunday 22 June 2025 19:56:44 +0000 (0:00:00.325) 0:05:47.465 *********** 2025-06-22 20:02:26.028974 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.028978 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.028983 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.028992 | orchestrator | 2025-06-22 20:02:26.028996 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-22 20:02:26.029001 | orchestrator | Sunday 22 June 2025 19:56:44 +0000 (0:00:00.305) 0:05:47.771 *********** 2025-06-22 20:02:26.029006 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-06-22 20:02:26.029011 | orchestrator | 2025-06-22 20:02:26.029016 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-22 20:02:26.029020 | orchestrator | Sunday 22 June 2025 19:56:45 +0000 (0:00:00.761) 0:05:48.532 *********** 2025-06-22 20:02:26.029025 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.029030 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.029035 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.029040 | orchestrator | 2025-06-22 20:02:26.029045 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-22 20:02:26.029063 | orchestrator | Sunday 22 June 2025 19:56:45 +0000 (0:00:00.330) 0:05:48.862 *********** 2025-06-22 20:02:26.029069 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.029074 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.029079 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.029083 | orchestrator | 2025-06-22 20:02:26.029088 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-22 20:02:26.029093 | orchestrator | Sunday 22 June 2025 19:56:45 +0000 (0:00:00.304) 0:05:49.167 *********** 2025-06-22 20:02:26.029098 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.029103 | orchestrator | 2025-06-22 20:02:26.029107 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-22 20:02:26.029112 | orchestrator | Sunday 22 June 2025 19:56:46 +0000 (0:00:00.756) 0:05:49.924 *********** 2025-06-22 20:02:26.029117 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.029122 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.029126 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.029148 | orchestrator | 2025-06-22 20:02:26.029153 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-22 20:02:26.029158 | orchestrator | Sunday 22 June 2025 19:56:47 +0000 (0:00:01.264) 0:05:51.188 *********** 2025-06-22 20:02:26.029162 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.029167 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.029172 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.029177 | orchestrator | 2025-06-22 20:02:26.029182 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-22 20:02:26.029186 | orchestrator | Sunday 22 June 2025 19:56:49 +0000 (0:00:01.288) 0:05:52.476 *********** 2025-06-22 20:02:26.029194 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.029199 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.029204 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.029209 | orchestrator | 2025-06-22 20:02:26.029214 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-22 20:02:26.029218 | orchestrator | Sunday 22 June 2025 19:56:51 +0000 (0:00:02.137) 0:05:54.614 *********** 2025-06-22 20:02:26.029223 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.029228 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.029233 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.029238 | orchestrator | 2025-06-22 20:02:26.029243 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2025-06-22 20:02:26.029248 | orchestrator | Sunday 22 June 2025 19:56:53 +0000 (0:00:02.006) 0:05:56.620 *********** 2025-06-22 20:02:26.029252 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.029257 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.029262 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-22 20:02:26.029267 | orchestrator | 2025-06-22 20:02:26.029271 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-22 20:02:26.029280 | orchestrator | Sunday 22 June 2025 19:56:53 +0000 (0:00:00.427) 0:05:57.047 *********** 2025-06-22 20:02:26.029285 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-22 20:02:26.029290 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-22 20:02:26.029295 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-22 20:02:26.029300 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-22 20:02:26.029305 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-06-22 20:02:26.029310 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2025-06-22 20:02:26.029315 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.029319 | orchestrator | 2025-06-22 20:02:26.029324 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-22 20:02:26.029330 | orchestrator | Sunday 22 June 2025 19:57:30 +0000 (0:00:36.535) 0:06:33.583 *********** 2025-06-22 20:02:26.029334 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.029339 | orchestrator | 2025-06-22 20:02:26.029344 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-22 20:02:26.029349 | orchestrator | Sunday 22 June 2025 19:57:31 +0000 (0:00:01.284) 0:06:34.868 *********** 2025-06-22 20:02:26.029354 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.029358 | orchestrator | 2025-06-22 20:02:26.029363 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-22 20:02:26.029368 | orchestrator | Sunday 22 June 2025 19:57:32 +0000 (0:00:00.514) 0:06:35.382 *********** 2025-06-22 20:02:26.029373 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.029378 | orchestrator | 2025-06-22 20:02:26.029382 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-22 20:02:26.029387 | orchestrator | Sunday 22 June 2025 19:57:32 +0000 (0:00:00.160) 0:06:35.542 *********** 2025-06-22 20:02:26.029392 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-22 20:02:26.029397 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-22 20:02:26.029402 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-22 20:02:26.029406 | orchestrator | 2025-06-22 20:02:26.029411 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
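Annotation: the mgr module reconciliation recorded here (iostat, nfs and restful disabled above; dashboard and prometheus enabled in the output that follows) reduces to plain `ceph mgr` CLI calls issued against the first monitor. A minimal manual equivalent, with module names copied from this log and the container/cluster wrappers omitted, would look roughly like:

    # wait until every mgr daemon has registered (the "Wait for all mgr to be up" retries above do this)
    ceph mgr stat
    # drop modules that are enabled but not requested (per the log: iostat, nfs, restful)
    ceph mgr module disable iostat
    ceph mgr module disable nfs
    ceph mgr module disable restful
    # enable the modules requested via ceph_mgr_modules (per the log: dashboard, prometheus)
    ceph mgr module enable dashboard
    ceph mgr module enable prometheus

This is a sketch of the effect, not the role's literal implementation; ceph-ansible drives the same commands per item through its task loop.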
************************************** 2025-06-22 20:02:26.029416 | orchestrator | Sunday 22 June 2025 19:57:39 +0000 (0:00:06.804) 0:06:42.347 *********** 2025-06-22 20:02:26.029421 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-22 20:02:26.029440 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-22 20:02:26.029446 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-22 20:02:26.029450 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-22 20:02:26.029455 | orchestrator | 2025-06-22 20:02:26.029460 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:02:26.029465 | orchestrator | Sunday 22 June 2025 19:57:44 +0000 (0:00:05.165) 0:06:47.513 *********** 2025-06-22 20:02:26.029470 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.029474 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.029479 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.029484 | orchestrator | 2025-06-22 20:02:26.029489 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-22 20:02:26.029494 | orchestrator | Sunday 22 June 2025 19:57:45 +0000 (0:00:00.894) 0:06:48.407 *********** 2025-06-22 20:02:26.029499 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.029507 | orchestrator | 2025-06-22 20:02:26.029512 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-22 20:02:26.029516 | orchestrator | Sunday 22 June 2025 19:57:45 +0000 (0:00:00.514) 0:06:48.922 *********** 2025-06-22 20:02:26.029521 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.029526 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.029531 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.029535 | orchestrator | 2025-06-22 20:02:26.029540 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-22 20:02:26.029545 | orchestrator | Sunday 22 June 2025 19:57:46 +0000 (0:00:00.362) 0:06:49.285 *********** 2025-06-22 20:02:26.029552 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.029557 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.029562 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.029567 | orchestrator | 2025-06-22 20:02:26.029571 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-22 20:02:26.029576 | orchestrator | Sunday 22 June 2025 19:57:47 +0000 (0:00:01.476) 0:06:50.761 *********** 2025-06-22 20:02:26.029581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:02:26.029586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:02:26.029591 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:02:26.029595 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.029600 | orchestrator | 2025-06-22 20:02:26.029605 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-22 20:02:26.029610 | orchestrator | Sunday 22 June 2025 19:57:48 +0000 (0:00:00.620) 0:06:51.382 *********** 2025-06-22 20:02:26.029614 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.029619 | orchestrator | ok: 
[testbed-node-1] 2025-06-22 20:02:26.029624 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.029629 | orchestrator | 2025-06-22 20:02:26.029634 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-22 20:02:26.029638 | orchestrator | 2025-06-22 20:02:26.029643 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:02:26.029648 | orchestrator | Sunday 22 June 2025 19:57:48 +0000 (0:00:00.553) 0:06:51.936 *********** 2025-06-22 20:02:26.029653 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.029658 | orchestrator | 2025-06-22 20:02:26.029662 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:02:26.029667 | orchestrator | Sunday 22 June 2025 19:57:49 +0000 (0:00:00.719) 0:06:52.655 *********** 2025-06-22 20:02:26.029672 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.029677 | orchestrator | 2025-06-22 20:02:26.029682 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:02:26.029686 | orchestrator | Sunday 22 June 2025 19:57:49 +0000 (0:00:00.562) 0:06:53.217 *********** 2025-06-22 20:02:26.029691 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.029696 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.029701 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.029705 | orchestrator | 2025-06-22 20:02:26.029710 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:02:26.029715 | orchestrator | Sunday 22 June 2025 19:57:50 +0000 (0:00:00.284) 0:06:53.501 *********** 2025-06-22 20:02:26.029720 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.029724 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.029729 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.029734 | orchestrator | 2025-06-22 20:02:26.029739 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:02:26.029744 | orchestrator | Sunday 22 June 2025 19:57:51 +0000 (0:00:01.019) 0:06:54.520 *********** 2025-06-22 20:02:26.029752 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.029757 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.029762 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.029766 | orchestrator | 2025-06-22 20:02:26.029771 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:02:26.029776 | orchestrator | Sunday 22 June 2025 19:57:52 +0000 (0:00:00.729) 0:06:55.250 *********** 2025-06-22 20:02:26.029781 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.029786 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.029790 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.029795 | orchestrator | 2025-06-22 20:02:26.029800 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:02:26.029804 | orchestrator | Sunday 22 June 2025 19:57:52 +0000 (0:00:00.670) 0:06:55.921 *********** 2025-06-22 20:02:26.029809 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.029814 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.029819 | 
orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.029824 | orchestrator | 2025-06-22 20:02:26.029828 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:02:26.029833 | orchestrator | Sunday 22 June 2025 19:57:53 +0000 (0:00:00.329) 0:06:56.250 *********** 2025-06-22 20:02:26.029852 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.029857 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.029862 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.029867 | orchestrator | 2025-06-22 20:02:26.029872 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:02:26.029877 | orchestrator | Sunday 22 June 2025 19:57:53 +0000 (0:00:00.575) 0:06:56.825 *********** 2025-06-22 20:02:26.029882 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.029886 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.029891 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.029896 | orchestrator | 2025-06-22 20:02:26.029901 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:02:26.029906 | orchestrator | Sunday 22 June 2025 19:57:53 +0000 (0:00:00.319) 0:06:57.145 *********** 2025-06-22 20:02:26.029910 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.029915 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.029920 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.029925 | orchestrator | 2025-06-22 20:02:26.029929 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:02:26.029934 | orchestrator | Sunday 22 June 2025 19:57:54 +0000 (0:00:00.772) 0:06:57.918 *********** 2025-06-22 20:02:26.029939 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.029944 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.029949 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.029953 | orchestrator | 2025-06-22 20:02:26.029958 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:02:26.029963 | orchestrator | Sunday 22 June 2025 19:57:55 +0000 (0:00:00.698) 0:06:58.616 *********** 2025-06-22 20:02:26.029968 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.029973 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.029977 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.029982 | orchestrator | 2025-06-22 20:02:26.029990 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:02:26.029995 | orchestrator | Sunday 22 June 2025 19:57:55 +0000 (0:00:00.605) 0:06:59.222 *********** 2025-06-22 20:02:26.030000 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.030004 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.030009 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.030031 | orchestrator | 2025-06-22 20:02:26.030037 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:02:26.030042 | orchestrator | Sunday 22 June 2025 19:57:56 +0000 (0:00:00.351) 0:06:59.573 *********** 2025-06-22 20:02:26.030047 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030052 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030060 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030065 | orchestrator | 2025-06-22 20:02:26.030070 | 
orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:02:26.030075 | orchestrator | Sunday 22 June 2025 19:57:56 +0000 (0:00:00.325) 0:06:59.898 *********** 2025-06-22 20:02:26.030080 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030084 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030089 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030094 | orchestrator | 2025-06-22 20:02:26.030099 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:02:26.030104 | orchestrator | Sunday 22 June 2025 19:57:57 +0000 (0:00:00.392) 0:07:00.291 *********** 2025-06-22 20:02:26.030108 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030113 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030118 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030123 | orchestrator | 2025-06-22 20:02:26.030140 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:02:26.030148 | orchestrator | Sunday 22 June 2025 19:57:57 +0000 (0:00:00.562) 0:07:00.853 *********** 2025-06-22 20:02:26.030156 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.030164 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.030171 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.030180 | orchestrator | 2025-06-22 20:02:26.030186 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:02:26.030190 | orchestrator | Sunday 22 June 2025 19:57:57 +0000 (0:00:00.319) 0:07:01.173 *********** 2025-06-22 20:02:26.030195 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.030200 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.030205 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.030209 | orchestrator | 2025-06-22 20:02:26.030214 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:02:26.030219 | orchestrator | Sunday 22 June 2025 19:57:58 +0000 (0:00:00.334) 0:07:01.508 *********** 2025-06-22 20:02:26.030224 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.030229 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.030233 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.030238 | orchestrator | 2025-06-22 20:02:26.030243 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:02:26.030248 | orchestrator | Sunday 22 June 2025 19:57:58 +0000 (0:00:00.291) 0:07:01.800 *********** 2025-06-22 20:02:26.030252 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030257 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030262 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030267 | orchestrator | 2025-06-22 20:02:26.030271 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:02:26.030276 | orchestrator | Sunday 22 June 2025 19:57:59 +0000 (0:00:00.625) 0:07:02.425 *********** 2025-06-22 20:02:26.030281 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030286 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030291 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030295 | orchestrator | 2025-06-22 20:02:26.030300 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-22 20:02:26.030305 | 
orchestrator | Sunday 22 June 2025 19:57:59 +0000 (0:00:00.648) 0:07:03.074 *********** 2025-06-22 20:02:26.030310 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030314 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030319 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030324 | orchestrator | 2025-06-22 20:02:26.030329 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-22 20:02:26.030333 | orchestrator | Sunday 22 June 2025 19:58:00 +0000 (0:00:00.366) 0:07:03.441 *********** 2025-06-22 20:02:26.030338 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:02:26.030346 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:02:26.030355 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:02:26.030360 | orchestrator | 2025-06-22 20:02:26.030365 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-22 20:02:26.030370 | orchestrator | Sunday 22 June 2025 19:58:01 +0000 (0:00:00.919) 0:07:04.360 *********** 2025-06-22 20:02:26.030375 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.030379 | orchestrator | 2025-06-22 20:02:26.030384 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-22 20:02:26.030389 | orchestrator | Sunday 22 June 2025 19:58:01 +0000 (0:00:00.650) 0:07:05.010 *********** 2025-06-22 20:02:26.030394 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.030399 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.030404 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.030408 | orchestrator | 2025-06-22 20:02:26.030413 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-22 20:02:26.030418 | orchestrator | Sunday 22 June 2025 19:58:02 +0000 (0:00:00.261) 0:07:05.272 *********** 2025-06-22 20:02:26.030423 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.030428 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.030433 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.030437 | orchestrator | 2025-06-22 20:02:26.030442 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-22 20:02:26.030447 | orchestrator | Sunday 22 June 2025 19:58:02 +0000 (0:00:00.254) 0:07:05.526 *********** 2025-06-22 20:02:26.030454 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030459 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030464 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030469 | orchestrator | 2025-06-22 20:02:26.030474 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-22 20:02:26.030479 | orchestrator | Sunday 22 June 2025 19:58:02 +0000 (0:00:00.715) 0:07:06.242 *********** 2025-06-22 20:02:26.030483 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030488 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030493 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030498 | orchestrator | 2025-06-22 20:02:26.030502 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-22 20:02:26.030507 | orchestrator | 
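Annotation: the "Apply operating system tuning" task whose per-item output follows sets a handful of kernel parameters through Ansible's sysctl module. A rough shell equivalent, using the values visible in this run and leaving out the persistence to /etc/sysctl.d that the role also handles, is:

    # runtime application of the OSD host tuning seen in the loop below
    sysctl -w fs.aio-max-nr=1048576
    sysctl -w fs.file-max=26234859
    sysctl -w vm.zone_reclaim_mode=0
    sysctl -w vm.swappiness=10
    sysctl -w vm.min_free_kbytes=67584

The vm.min_free_kbytes value is derived from the host's defaults gathered a few tasks earlier, so it can differ between environments.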
Sunday 22 June 2025 19:58:03 +0000 (0:00:00.318) 0:07:06.560 *********** 2025-06-22 20:02:26.030512 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 20:02:26.030517 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 20:02:26.030522 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 20:02:26.030527 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 20:02:26.030531 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 20:02:26.030536 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 20:02:26.030541 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 20:02:26.030546 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 20:02:26.030551 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 20:02:26.030556 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 20:02:26.030560 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 20:02:26.030565 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 20:02:26.030570 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 20:02:26.030578 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 20:02:26.030583 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 20:02:26.030588 | orchestrator | 2025-06-22 20:02:26.030592 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-06-22 20:02:26.030597 | orchestrator | Sunday 22 June 2025 19:58:07 +0000 (0:00:03.981) 0:07:10.542 *********** 2025-06-22 20:02:26.030602 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.030607 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.030611 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.030616 | orchestrator | 2025-06-22 20:02:26.030621 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-22 20:02:26.030626 | orchestrator | Sunday 22 June 2025 19:58:07 +0000 (0:00:00.278) 0:07:10.821 *********** 2025-06-22 20:02:26.030631 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.030635 | orchestrator | 2025-06-22 20:02:26.030640 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-22 20:02:26.030645 | orchestrator | Sunday 22 June 2025 19:58:08 +0000 (0:00:00.618) 0:07:11.439 *********** 2025-06-22 20:02:26.030650 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 20:02:26.030655 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 20:02:26.030660 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 20:02:26.030664 | 
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2025-06-22 20:02:26.030673 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-22 20:02:26.030678 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-22 20:02:26.030683 | orchestrator | 2025-06-22 20:02:26.030688 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-22 20:02:26.030693 | orchestrator | Sunday 22 June 2025 19:58:09 +0000 (0:00:01.005) 0:07:12.444 *********** 2025-06-22 20:02:26.030698 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.030703 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:02:26.030707 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:02:26.030712 | orchestrator | 2025-06-22 20:02:26.030717 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:02:26.030722 | orchestrator | Sunday 22 June 2025 19:58:11 +0000 (0:00:02.176) 0:07:14.620 *********** 2025-06-22 20:02:26.030727 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:02:26.030731 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:02:26.030736 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.030741 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:02:26.030746 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 20:02:26.030751 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.030756 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:02:26.030761 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 20:02:26.030765 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.030770 | orchestrator | 2025-06-22 20:02:26.030775 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-22 20:02:26.030780 | orchestrator | Sunday 22 June 2025 19:58:12 +0000 (0:00:01.176) 0:07:15.797 *********** 2025-06-22 20:02:26.030785 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.030790 | orchestrator | 2025-06-22 20:02:26.030794 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-22 20:02:26.030799 | orchestrator | Sunday 22 June 2025 19:58:15 +0000 (0:00:02.724) 0:07:18.522 *********** 2025-06-22 20:02:26.030804 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.030812 | orchestrator | 2025-06-22 20:02:26.030817 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-22 20:02:26.030822 | orchestrator | Sunday 22 June 2025 19:58:15 +0000 (0:00:00.533) 0:07:19.056 *********** 2025-06-22 20:02:26.030827 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-b2f14396-315c-50f9-a6a7-8817318b41c3', 'data_vg': 'ceph-b2f14396-315c-50f9-a6a7-8817318b41c3'}) 2025-06-22 20:02:26.030832 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-988500a7-3c26-5f89-b599-1c63900dc902', 'data_vg': 'ceph-988500a7-3c26-5f89-b599-1c63900dc902'}) 2025-06-22 20:02:26.030837 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-809c9636-3d83-5d3b-8a98-356a4387ae79', 'data_vg': 'ceph-809c9636-3d83-5d3b-8a98-356a4387ae79'}) 
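Annotation: the noup flag set above and the per-volume creation loop running here map onto standard ceph and ceph-volume commands. A hedged sketch of what a single loop item amounts to, with the VG/LV names copied from this log and the container wrapper omitted:

    # keep freshly created OSDs from being marked up until all of them are prepared
    ceph osd set noup
    # create one bluestore OSD from a pre-provisioned LVM volume (one item of the loop above)
    ceph-volume lvm create --bluestore \
        --data ceph-988500a7-3c26-5f89-b599-1c63900dc902/osd-block-988500a7-3c26-5f89-b599-1c63900dc902
    # once every OSD has been started, the flag is removed again (the "Unset noup flag" task later on)
    ceph osd unset noup

The roughly 47-second runtime reported for this task reflects six such creations spread across the three OSD nodes.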
2025-06-22 20:02:26.030842 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-f1623286-8630-50a6-960f-aa7fe8c22ac9', 'data_vg': 'ceph-f1623286-8630-50a6-960f-aa7fe8c22ac9'}) 2025-06-22 20:02:26.030847 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-60bbbdec-af53-55ad-b293-31f676104815', 'data_vg': 'ceph-60bbbdec-af53-55ad-b293-31f676104815'}) 2025-06-22 20:02:26.030852 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e', 'data_vg': 'ceph-0f31c53c-bcdf-5bd2-bfc5-d0de6e74979e'}) 2025-06-22 20:02:26.030857 | orchestrator | 2025-06-22 20:02:26.030861 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-22 20:02:26.030866 | orchestrator | Sunday 22 June 2025 19:59:02 +0000 (0:00:47.000) 0:08:06.056 *********** 2025-06-22 20:02:26.030871 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.030876 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.030881 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.030885 | orchestrator | 2025-06-22 20:02:26.030890 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-22 20:02:26.030895 | orchestrator | Sunday 22 June 2025 19:59:03 +0000 (0:00:00.545) 0:08:06.602 *********** 2025-06-22 20:02:26.030900 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.030905 | orchestrator | 2025-06-22 20:02:26.030910 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-22 20:02:26.030914 | orchestrator | Sunday 22 June 2025 19:59:03 +0000 (0:00:00.526) 0:08:07.129 *********** 2025-06-22 20:02:26.030919 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.030924 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.030929 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.030934 | orchestrator | 2025-06-22 20:02:26.030938 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-22 20:02:26.031001 | orchestrator | Sunday 22 June 2025 19:59:04 +0000 (0:00:00.667) 0:08:07.796 *********** 2025-06-22 20:02:26.031015 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.031020 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.031024 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.031029 | orchestrator | 2025-06-22 20:02:26.031034 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-22 20:02:26.031039 | orchestrator | Sunday 22 June 2025 19:59:07 +0000 (0:00:03.064) 0:08:10.861 *********** 2025-06-22 20:02:26.031044 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.031049 | orchestrator | 2025-06-22 20:02:26.031057 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-06-22 20:02:26.031062 | orchestrator | Sunday 22 June 2025 19:59:08 +0000 (0:00:00.515) 0:08:11.376 *********** 2025-06-22 20:02:26.031067 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.031072 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.031077 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.031082 | orchestrator | 2025-06-22 20:02:26.031087 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] 
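Annotation: after the unit file generated above and the ceph-osd target file that follows are templated out, bringing the OSDs up is ordinary systemd handling. Assuming the ceph-osd@<id> instance naming of the generated units (the ids 0 to 5 collected earlier in this run), the manual equivalent of the enable/start tasks below is approximately:

    # pick up the freshly written unit and target files
    systemctl daemon-reload
    # enable the umbrella target, then the individual OSD instances
    systemctl enable --now ceph-osd.target
    systemctl enable --now ceph-osd@0 ceph-osd@4    # e.g. the two OSDs on testbed-node-3

In this containerized deployment the generated unit starts the OSD container rather than a packaged daemon, but the systemd-facing handling is the same.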
************************ 2025-06-22 20:02:26.031095 | orchestrator | Sunday 22 June 2025 19:59:09 +0000 (0:00:01.189) 0:08:12.565 *********** 2025-06-22 20:02:26.031100 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.031105 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.031110 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.031115 | orchestrator | 2025-06-22 20:02:26.031119 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-22 20:02:26.031124 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:01.122) 0:08:13.688 *********** 2025-06-22 20:02:26.031163 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.031169 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.031174 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.031179 | orchestrator | 2025-06-22 20:02:26.031183 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-22 20:02:26.031188 | orchestrator | Sunday 22 June 2025 19:59:12 +0000 (0:00:02.074) 0:08:15.763 *********** 2025-06-22 20:02:26.031193 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031198 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031203 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.031207 | orchestrator | 2025-06-22 20:02:26.031212 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-06-22 20:02:26.031220 | orchestrator | Sunday 22 June 2025 19:59:12 +0000 (0:00:00.320) 0:08:16.083 *********** 2025-06-22 20:02:26.031225 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031230 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031235 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.031240 | orchestrator | 2025-06-22 20:02:26.031244 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-22 20:02:26.031249 | orchestrator | Sunday 22 June 2025 19:59:13 +0000 (0:00:00.338) 0:08:16.421 *********** 2025-06-22 20:02:26.031254 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-22 20:02:26.031259 | orchestrator | ok: [testbed-node-3] => (item=4) 2025-06-22 20:02:26.031264 | orchestrator | ok: [testbed-node-5] => (item=3) 2025-06-22 20:02:26.031268 | orchestrator | ok: [testbed-node-4] => (item=5) 2025-06-22 20:02:26.031273 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:02:26.031278 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-22 20:02:26.031283 | orchestrator | 2025-06-22 20:02:26.031287 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-22 20:02:26.031292 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:01.024) 0:08:17.446 *********** 2025-06-22 20:02:26.031297 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-22 20:02:26.031302 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-22 20:02:26.031307 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-22 20:02:26.031312 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-22 20:02:26.031317 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-06-22 20:02:26.031321 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-22 20:02:26.031326 | orchestrator | 2025-06-22 20:02:26.031331 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-06-22 20:02:26.031336 | 
orchestrator | Sunday 22 June 2025 19:59:16 +0000 (0:00:02.488) 0:08:19.934 *********** 2025-06-22 20:02:26.031341 | orchestrator | changed: [testbed-node-3] => (item=4) 2025-06-22 20:02:26.031346 | orchestrator | changed: [testbed-node-5] => (item=3) 2025-06-22 20:02:26.031350 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-22 20:02:26.031355 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-22 20:02:26.031360 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-22 20:02:26.031365 | orchestrator | changed: [testbed-node-4] => (item=5) 2025-06-22 20:02:26.031369 | orchestrator | 2025-06-22 20:02:26.031374 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-22 20:02:26.031379 | orchestrator | Sunday 22 June 2025 19:59:20 +0000 (0:00:03.954) 0:08:23.889 *********** 2025-06-22 20:02:26.031387 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031392 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031397 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.031402 | orchestrator | 2025-06-22 20:02:26.031407 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-22 20:02:26.031412 | orchestrator | Sunday 22 June 2025 19:59:23 +0000 (0:00:02.431) 0:08:26.321 *********** 2025-06-22 20:02:26.031416 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031421 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031426 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-06-22 20:02:26.031431 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.031436 | orchestrator | 2025-06-22 20:02:26.031440 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-22 20:02:26.031445 | orchestrator | Sunday 22 June 2025 19:59:36 +0000 (0:00:13.108) 0:08:39.430 *********** 2025-06-22 20:02:26.031450 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031455 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031460 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.031464 | orchestrator | 2025-06-22 20:02:26.031469 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:02:26.031474 | orchestrator | Sunday 22 June 2025 19:59:37 +0000 (0:00:01.146) 0:08:40.577 *********** 2025-06-22 20:02:26.031479 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031484 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031489 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.031494 | orchestrator | 2025-06-22 20:02:26.031498 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-22 20:02:26.031505 | orchestrator | Sunday 22 June 2025 19:59:37 +0000 (0:00:00.317) 0:08:40.894 *********** 2025-06-22 20:02:26.031510 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.031515 | orchestrator | 2025-06-22 20:02:26.031520 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-22 20:02:26.031524 | orchestrator | Sunday 22 June 2025 19:59:38 +0000 (0:00:00.790) 0:08:41.684 *********** 2025-06-22 20:02:26.031529 | orchestrator | 
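Annotation: the retry loop visible above ("Wait for all osd to be up", one retry consumed before success) simply polls the cluster until the number of up OSDs matches the number of registered OSDs. A hedged approximation of that check, assuming the num_osds/num_up_osds fields of `ceph osd stat -f json` and that jq is available on the monitor host:

    # succeed only when every registered OSD reports as up; the task retries until this holds
    test "$(ceph osd stat -f json | jq '.num_osds')" \
      -eq "$(ceph osd stat -f json | jq '.num_up_osds')"

Only then is the noup flag cleared for good and the play allowed to continue.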
skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.031534 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.031538 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.031543 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031547 | orchestrator | 2025-06-22 20:02:26.031552 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-22 20:02:26.031556 | orchestrator | Sunday 22 June 2025 19:59:38 +0000 (0:00:00.396) 0:08:42.081 *********** 2025-06-22 20:02:26.031561 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031565 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031570 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.031574 | orchestrator | 2025-06-22 20:02:26.031579 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-22 20:02:26.031583 | orchestrator | Sunday 22 June 2025 19:59:39 +0000 (0:00:00.290) 0:08:42.371 *********** 2025-06-22 20:02:26.031588 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031592 | orchestrator | 2025-06-22 20:02:26.031597 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-22 20:02:26.031604 | orchestrator | Sunday 22 June 2025 19:59:39 +0000 (0:00:00.196) 0:08:42.568 *********** 2025-06-22 20:02:26.031609 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031613 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031618 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.031622 | orchestrator | 2025-06-22 20:02:26.031627 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-22 20:02:26.031635 | orchestrator | Sunday 22 June 2025 19:59:39 +0000 (0:00:00.432) 0:08:43.001 *********** 2025-06-22 20:02:26.031639 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031644 | orchestrator | 2025-06-22 20:02:26.031649 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-22 20:02:26.031653 | orchestrator | Sunday 22 June 2025 19:59:39 +0000 (0:00:00.210) 0:08:43.211 *********** 2025-06-22 20:02:26.031658 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031662 | orchestrator | 2025-06-22 20:02:26.031667 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-22 20:02:26.031671 | orchestrator | Sunday 22 June 2025 19:59:40 +0000 (0:00:00.205) 0:08:43.417 *********** 2025-06-22 20:02:26.031676 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031680 | orchestrator | 2025-06-22 20:02:26.031685 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-22 20:02:26.031689 | orchestrator | Sunday 22 June 2025 19:59:40 +0000 (0:00:00.110) 0:08:43.528 *********** 2025-06-22 20:02:26.031694 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031698 | orchestrator | 2025-06-22 20:02:26.031703 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-22 20:02:26.031708 | orchestrator | Sunday 22 June 2025 19:59:40 +0000 (0:00:00.189) 0:08:43.717 *********** 2025-06-22 20:02:26.031712 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031717 | orchestrator | 2025-06-22 20:02:26.031721 | orchestrator | RUNNING 
HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-22 20:02:26.031726 | orchestrator | Sunday 22 June 2025 19:59:40 +0000 (0:00:00.209) 0:08:43.927 *********** 2025-06-22 20:02:26.031730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.031735 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.031740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.031744 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031749 | orchestrator | 2025-06-22 20:02:26.031753 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-22 20:02:26.031758 | orchestrator | Sunday 22 June 2025 19:59:41 +0000 (0:00:00.378) 0:08:44.305 *********** 2025-06-22 20:02:26.031762 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031767 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031771 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.031776 | orchestrator | 2025-06-22 20:02:26.031780 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-22 20:02:26.031785 | orchestrator | Sunday 22 June 2025 19:59:41 +0000 (0:00:00.309) 0:08:44.615 *********** 2025-06-22 20:02:26.031789 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031794 | orchestrator | 2025-06-22 20:02:26.031798 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-22 20:02:26.031803 | orchestrator | Sunday 22 June 2025 19:59:41 +0000 (0:00:00.237) 0:08:44.853 *********** 2025-06-22 20:02:26.031808 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031812 | orchestrator | 2025-06-22 20:02:26.031817 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-22 20:02:26.031821 | orchestrator | 2025-06-22 20:02:26.031826 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:02:26.031830 | orchestrator | Sunday 22 June 2025 19:59:42 +0000 (0:00:01.184) 0:08:46.037 *********** 2025-06-22 20:02:26.031835 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.031840 | orchestrator | 2025-06-22 20:02:26.031845 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:02:26.031850 | orchestrator | Sunday 22 June 2025 19:59:44 +0000 (0:00:01.241) 0:08:47.279 *********** 2025-06-22 20:02:26.031857 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.031865 | orchestrator | 2025-06-22 20:02:26.031870 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:02:26.031874 | orchestrator | Sunday 22 June 2025 19:59:45 +0000 (0:00:01.211) 0:08:48.491 *********** 2025-06-22 20:02:26.031879 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.031884 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.031888 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.031893 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.031897 | orchestrator | ok: 
[testbed-node-1] 2025-06-22 20:02:26.031902 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.031906 | orchestrator | 2025-06-22 20:02:26.031911 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:02:26.031916 | orchestrator | Sunday 22 June 2025 19:59:46 +0000 (0:00:01.257) 0:08:49.749 *********** 2025-06-22 20:02:26.031920 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.031925 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.031929 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.031934 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.031938 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.031943 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.031947 | orchestrator | 2025-06-22 20:02:26.031952 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:02:26.031957 | orchestrator | Sunday 22 June 2025 19:59:47 +0000 (0:00:00.746) 0:08:50.495 *********** 2025-06-22 20:02:26.031961 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.031966 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.031970 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.031975 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.031979 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.031987 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.031991 | orchestrator | 2025-06-22 20:02:26.031996 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:02:26.032000 | orchestrator | Sunday 22 June 2025 19:59:48 +0000 (0:00:00.819) 0:08:51.315 *********** 2025-06-22 20:02:26.032005 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032010 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032014 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032019 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.032023 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032028 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032032 | orchestrator | 2025-06-22 20:02:26.032037 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:02:26.032041 | orchestrator | Sunday 22 June 2025 19:59:48 +0000 (0:00:00.700) 0:08:52.016 *********** 2025-06-22 20:02:26.032046 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.032050 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.032055 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.032059 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032064 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.032068 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.032073 | orchestrator | 2025-06-22 20:02:26.032078 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:02:26.032082 | orchestrator | Sunday 22 June 2025 19:59:49 +0000 (0:00:01.174) 0:08:53.190 *********** 2025-06-22 20:02:26.032087 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.032091 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.032096 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.032100 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032105 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032109 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 20:02:26.032114 | orchestrator | 2025-06-22 20:02:26.032119 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:02:26.032140 | orchestrator | Sunday 22 June 2025 19:59:50 +0000 (0:00:00.624) 0:08:53.815 *********** 2025-06-22 20:02:26.032147 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.032151 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.032156 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.032161 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032165 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032170 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.032174 | orchestrator | 2025-06-22 20:02:26.032179 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:02:26.032183 | orchestrator | Sunday 22 June 2025 19:59:51 +0000 (0:00:00.667) 0:08:54.482 *********** 2025-06-22 20:02:26.032188 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032192 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032197 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032201 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032206 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.032210 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.032215 | orchestrator | 2025-06-22 20:02:26.032219 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:02:26.032224 | orchestrator | Sunday 22 June 2025 19:59:52 +0000 (0:00:00.935) 0:08:55.418 *********** 2025-06-22 20:02:26.032228 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032233 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032237 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032242 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032246 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.032251 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.032255 | orchestrator | 2025-06-22 20:02:26.032260 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:02:26.032264 | orchestrator | Sunday 22 June 2025 19:59:53 +0000 (0:00:01.072) 0:08:56.491 *********** 2025-06-22 20:02:26.032269 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.032274 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.032278 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.032283 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032287 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032292 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.032296 | orchestrator | 2025-06-22 20:02:26.032301 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:02:26.032305 | orchestrator | Sunday 22 June 2025 19:59:53 +0000 (0:00:00.517) 0:08:57.009 *********** 2025-06-22 20:02:26.032310 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.032314 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.032319 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.032326 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032331 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.032335 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.032340 | orchestrator | 2025-06-22 
20:02:26.032345 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:02:26.032349 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.537) 0:08:57.546 *********** 2025-06-22 20:02:26.032354 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032358 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032363 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032367 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032372 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032376 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.032381 | orchestrator | 2025-06-22 20:02:26.032386 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:02:26.032390 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.693) 0:08:58.240 *********** 2025-06-22 20:02:26.032395 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032399 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032404 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032412 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032417 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032421 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.032426 | orchestrator | 2025-06-22 20:02:26.032430 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:02:26.032435 | orchestrator | Sunday 22 June 2025 19:59:55 +0000 (0:00:00.521) 0:08:58.761 *********** 2025-06-22 20:02:26.032439 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032444 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032449 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032453 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032460 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032465 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.032469 | orchestrator | 2025-06-22 20:02:26.032474 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:02:26.032479 | orchestrator | Sunday 22 June 2025 19:59:56 +0000 (0:00:00.721) 0:08:59.483 *********** 2025-06-22 20:02:26.032483 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.032488 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.032492 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.032497 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032501 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032506 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.032511 | orchestrator | 2025-06-22 20:02:26.032515 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:02:26.032520 | orchestrator | Sunday 22 June 2025 19:59:56 +0000 (0:00:00.541) 0:09:00.024 *********** 2025-06-22 20:02:26.032524 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.032529 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.032533 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.032538 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.032542 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.032547 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.032552 | orchestrator | 2025-06-22 20:02:26.032556 | orchestrator | TASK 
[ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:02:26.032561 | orchestrator | Sunday 22 June 2025 19:59:57 +0000 (0:00:00.869) 0:09:00.894 *********** 2025-06-22 20:02:26.032565 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.032570 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.032574 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.032579 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032584 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.032588 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.032593 | orchestrator | 2025-06-22 20:02:26.032597 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:02:26.032602 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.609) 0:09:01.503 *********** 2025-06-22 20:02:26.032606 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032611 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032616 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032620 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032625 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.032629 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.032634 | orchestrator | 2025-06-22 20:02:26.032638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:02:26.032643 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.736) 0:09:02.240 *********** 2025-06-22 20:02:26.032647 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032652 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032656 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032661 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032665 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.032670 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.032674 | orchestrator | 2025-06-22 20:02:26.032679 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-22 20:02:26.032687 | orchestrator | Sunday 22 June 2025 20:00:00 +0000 (0:00:01.049) 0:09:03.289 *********** 2025-06-22 20:02:26.032691 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.032696 | orchestrator | 2025-06-22 20:02:26.032700 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-22 20:02:26.032705 | orchestrator | Sunday 22 June 2025 20:00:04 +0000 (0:00:04.161) 0:09:07.451 *********** 2025-06-22 20:02:26.032710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.032714 | orchestrator | 2025-06-22 20:02:26.032719 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-22 20:02:26.032723 | orchestrator | Sunday 22 June 2025 20:00:06 +0000 (0:00:02.047) 0:09:09.499 *********** 2025-06-22 20:02:26.032728 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.032733 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.032737 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.032742 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032746 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.032751 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.032755 | orchestrator | 2025-06-22 20:02:26.032760 | orchestrator | TASK [ceph-crash : Create 
/var/lib/ceph/crash/posted] ************************** 2025-06-22 20:02:26.032765 | orchestrator | Sunday 22 June 2025 20:00:08 +0000 (0:00:01.798) 0:09:11.297 *********** 2025-06-22 20:02:26.032772 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.032776 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.032781 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.032785 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.032790 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.032794 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.032799 | orchestrator | 2025-06-22 20:02:26.032804 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-22 20:02:26.032808 | orchestrator | Sunday 22 June 2025 20:00:09 +0000 (0:00:01.043) 0:09:12.341 *********** 2025-06-22 20:02:26.032813 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.032818 | orchestrator | 2025-06-22 20:02:26.032822 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-22 20:02:26.032827 | orchestrator | Sunday 22 June 2025 20:00:10 +0000 (0:00:01.159) 0:09:13.500 *********** 2025-06-22 20:02:26.032832 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.032836 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.032841 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.032845 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.032850 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.032854 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.032859 | orchestrator | 2025-06-22 20:02:26.032863 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-22 20:02:26.032868 | orchestrator | Sunday 22 June 2025 20:00:12 +0000 (0:00:01.820) 0:09:15.321 *********** 2025-06-22 20:02:26.032873 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.032877 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.032885 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.032889 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.032894 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.032898 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.032903 | orchestrator | 2025-06-22 20:02:26.032907 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-22 20:02:26.032912 | orchestrator | Sunday 22 June 2025 20:00:15 +0000 (0:00:03.779) 0:09:19.101 *********** 2025-06-22 20:02:26.032917 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.032921 | orchestrator | 2025-06-22 20:02:26.032929 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-22 20:02:26.032934 | orchestrator | Sunday 22 June 2025 20:00:17 +0000 (0:00:01.803) 0:09:20.904 *********** 2025-06-22 20:02:26.032939 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.032943 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.032948 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.032952 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.032957 | 
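The ceph-crash tasks above create a dedicated client.crash keyring on the first monitor, distribute it to every node, create /var/lib/ceph/crash/posted, and then template and start a systemd-managed crash-collector container per host. A minimal CLI sketch of the keyring step, assuming the default cluster name and the capability profile documented upstream for the crash module (the exact capabilities used in this run are not shown in the log):

  # Sketch only: create and export the crash-collector keyring on a monitor node.
  # The 'profile crash' capabilities follow the upstream crash-module documentation
  # and are an assumption here, not a value quoted from this deployment.
  ceph auth get-or-create client.crash \
      mon 'profile crash' mgr 'profile crash' \
      > /etc/ceph/ceph.client.crash.keyring

The "Copy ceph key(s) if needed" task then pushes that keyring to the non-monitor hosts before the per-host ceph-crash containers are started.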
orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.032962 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.032966 | orchestrator | 2025-06-22 20:02:26.032971 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-22 20:02:26.032975 | orchestrator | Sunday 22 June 2025 20:00:18 +0000 (0:00:01.125) 0:09:22.030 *********** 2025-06-22 20:02:26.032980 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.032984 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.032989 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.032993 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.032998 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.033002 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.033007 | orchestrator | 2025-06-22 20:02:26.033011 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-22 20:02:26.033016 | orchestrator | Sunday 22 June 2025 20:00:21 +0000 (0:00:02.883) 0:09:24.914 *********** 2025-06-22 20:02:26.033020 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033025 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033029 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033034 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.033038 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.033043 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.033047 | orchestrator | 2025-06-22 20:02:26.033052 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-22 20:02:26.033057 | orchestrator | 2025-06-22 20:02:26.033061 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:02:26.033066 | orchestrator | Sunday 22 June 2025 20:00:22 +0000 (0:00:01.212) 0:09:26.127 *********** 2025-06-22 20:02:26.033070 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.033075 | orchestrator | 2025-06-22 20:02:26.033079 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:02:26.033084 | orchestrator | Sunday 22 June 2025 20:00:23 +0000 (0:00:00.477) 0:09:26.605 *********** 2025-06-22 20:02:26.033089 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.033093 | orchestrator | 2025-06-22 20:02:26.033098 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:02:26.033102 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:00.699) 0:09:27.304 *********** 2025-06-22 20:02:26.033107 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033112 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033116 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033121 | orchestrator | 2025-06-22 20:02:26.033125 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:02:26.033144 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:00.315) 0:09:27.620 *********** 2025-06-22 20:02:26.033148 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033153 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033158 | orchestrator | ok: [testbed-node-5] 2025-06-22 
20:02:26.033162 | orchestrator | 2025-06-22 20:02:26.033167 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:02:26.033174 | orchestrator | Sunday 22 June 2025 20:00:25 +0000 (0:00:00.697) 0:09:28.318 *********** 2025-06-22 20:02:26.033179 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033183 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033188 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033198 | orchestrator | 2025-06-22 20:02:26.033202 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:02:26.033207 | orchestrator | Sunday 22 June 2025 20:00:26 +0000 (0:00:01.084) 0:09:29.402 *********** 2025-06-22 20:02:26.033212 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033216 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033221 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033225 | orchestrator | 2025-06-22 20:02:26.033230 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:02:26.033235 | orchestrator | Sunday 22 June 2025 20:00:26 +0000 (0:00:00.780) 0:09:30.182 *********** 2025-06-22 20:02:26.033239 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033244 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033248 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033253 | orchestrator | 2025-06-22 20:02:26.033258 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:02:26.033262 | orchestrator | Sunday 22 June 2025 20:00:27 +0000 (0:00:00.318) 0:09:30.501 *********** 2025-06-22 20:02:26.033267 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033271 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033276 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033280 | orchestrator | 2025-06-22 20:02:26.033285 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:02:26.033290 | orchestrator | Sunday 22 June 2025 20:00:27 +0000 (0:00:00.310) 0:09:30.812 *********** 2025-06-22 20:02:26.033294 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033299 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033306 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033310 | orchestrator | 2025-06-22 20:02:26.033315 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:02:26.033320 | orchestrator | Sunday 22 June 2025 20:00:28 +0000 (0:00:00.564) 0:09:31.376 *********** 2025-06-22 20:02:26.033324 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033329 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033333 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033338 | orchestrator | 2025-06-22 20:02:26.033343 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:02:26.033347 | orchestrator | Sunday 22 June 2025 20:00:28 +0000 (0:00:00.793) 0:09:32.169 *********** 2025-06-22 20:02:26.033352 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033356 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033361 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033365 | orchestrator | 2025-06-22 20:02:26.033370 | orchestrator | TASK [ceph-handler : Include 
check_socket_non_container.yml] ******************* 2025-06-22 20:02:26.033375 | orchestrator | Sunday 22 June 2025 20:00:29 +0000 (0:00:00.755) 0:09:32.924 *********** 2025-06-22 20:02:26.033379 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033384 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033388 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033393 | orchestrator | 2025-06-22 20:02:26.033398 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:02:26.033402 | orchestrator | Sunday 22 June 2025 20:00:29 +0000 (0:00:00.303) 0:09:33.228 *********** 2025-06-22 20:02:26.033407 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033411 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033416 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033420 | orchestrator | 2025-06-22 20:02:26.033425 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:02:26.033430 | orchestrator | Sunday 22 June 2025 20:00:30 +0000 (0:00:00.587) 0:09:33.816 *********** 2025-06-22 20:02:26.033434 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033439 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033443 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033448 | orchestrator | 2025-06-22 20:02:26.033453 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:02:26.033460 | orchestrator | Sunday 22 June 2025 20:00:30 +0000 (0:00:00.375) 0:09:34.191 *********** 2025-06-22 20:02:26.033465 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033470 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033474 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033479 | orchestrator | 2025-06-22 20:02:26.033484 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:02:26.033488 | orchestrator | Sunday 22 June 2025 20:00:31 +0000 (0:00:00.357) 0:09:34.548 *********** 2025-06-22 20:02:26.033493 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033497 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033502 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033507 | orchestrator | 2025-06-22 20:02:26.033511 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:02:26.033516 | orchestrator | Sunday 22 June 2025 20:00:31 +0000 (0:00:00.366) 0:09:34.915 *********** 2025-06-22 20:02:26.033520 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033525 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033529 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033534 | orchestrator | 2025-06-22 20:02:26.033538 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:02:26.033543 | orchestrator | Sunday 22 June 2025 20:00:32 +0000 (0:00:00.569) 0:09:35.485 *********** 2025-06-22 20:02:26.033548 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033552 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033557 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033561 | orchestrator | 2025-06-22 20:02:26.033566 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:02:26.033570 | orchestrator | Sunday 22 June 2025 
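The repeated "Check for a ... container" and "Set_fact handler_*_status" tasks above only record whether each daemon's container is already running on a node, so that the restart handlers later in the play know which services actually exist there. A sketch of that kind of probe, assuming Docker as the container engine (the exact command ceph-ansible runs is not quoted in this log):

  # Sketch: derive the OSD handler status from a running container named ceph-osd*
  if [ -n "$(docker ps -q --filter name=ceph-osd)" ]; then
      handler_osd_status=true
  else
      handler_osd_status=false
  fi

Checks for daemons that are not expected on these hosts (mon, mgr, rbd-mirror, nfs) are skipped, which is why the corresponding Set_fact tasks show as skipping above.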
20:00:32 +0000 (0:00:00.323) 0:09:35.809 *********** 2025-06-22 20:02:26.033575 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033579 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033584 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033589 | orchestrator | 2025-06-22 20:02:26.033593 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:02:26.033598 | orchestrator | Sunday 22 June 2025 20:00:32 +0000 (0:00:00.313) 0:09:36.122 *********** 2025-06-22 20:02:26.033602 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033609 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033614 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033618 | orchestrator | 2025-06-22 20:02:26.033623 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:02:26.033628 | orchestrator | Sunday 22 June 2025 20:00:33 +0000 (0:00:00.350) 0:09:36.473 *********** 2025-06-22 20:02:26.033632 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.033637 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.033641 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.033646 | orchestrator | 2025-06-22 20:02:26.033650 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-22 20:02:26.033655 | orchestrator | Sunday 22 June 2025 20:00:34 +0000 (0:00:00.792) 0:09:37.265 *********** 2025-06-22 20:02:26.033660 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033664 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033669 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-22 20:02:26.033673 | orchestrator | 2025-06-22 20:02:26.033678 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-22 20:02:26.033683 | orchestrator | Sunday 22 June 2025 20:00:34 +0000 (0:00:00.427) 0:09:37.693 *********** 2025-06-22 20:02:26.033687 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.033692 | orchestrator | 2025-06-22 20:02:26.033696 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-22 20:02:26.033701 | orchestrator | Sunday 22 June 2025 20:00:36 +0000 (0:00:02.469) 0:09:40.163 *********** 2025-06-22 20:02:26.033712 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-22 20:02:26.033719 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033723 | orchestrator | 2025-06-22 20:02:26.033728 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-22 20:02:26.033732 | orchestrator | Sunday 22 June 2025 20:00:37 +0000 (0:00:00.211) 0:09:40.374 *********** 2025-06-22 20:02:26.033738 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:02:26.033744 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': 
'', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:02:26.033748 | orchestrator | 2025-06-22 20:02:26.033753 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-22 20:02:26.033757 | orchestrator | Sunday 22 June 2025 20:00:44 +0000 (0:00:07.515) 0:09:47.890 *********** 2025-06-22 20:02:26.033762 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:02:26.033766 | orchestrator | 2025-06-22 20:02:26.033771 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-22 20:02:26.033775 | orchestrator | Sunday 22 June 2025 20:00:49 +0000 (0:00:04.388) 0:09:52.278 *********** 2025-06-22 20:02:26.033780 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.033785 | orchestrator | 2025-06-22 20:02:26.033790 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-22 20:02:26.033794 | orchestrator | Sunday 22 June 2025 20:00:49 +0000 (0:00:00.537) 0:09:52.816 *********** 2025-06-22 20:02:26.033799 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 20:02:26.033803 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 20:02:26.033808 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 20:02:26.033812 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-22 20:02:26.033817 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-22 20:02:26.033821 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-22 20:02:26.033826 | orchestrator | 2025-06-22 20:02:26.033830 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-22 20:02:26.033835 | orchestrator | Sunday 22 June 2025 20:00:50 +0000 (0:00:01.071) 0:09:53.887 *********** 2025-06-22 20:02:26.033839 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.033844 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:02:26.033848 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:02:26.033853 | orchestrator | 2025-06-22 20:02:26.033858 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:02:26.033862 | orchestrator | Sunday 22 June 2025 20:00:53 +0000 (0:00:02.384) 0:09:56.272 *********** 2025-06-22 20:02:26.033867 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:02:26.033871 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:02:26.033876 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.033880 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:02:26.033885 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 20:02:26.033889 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.033897 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:02:26.033902 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 20:02:26.033909 | orchestrator | changed: [testbed-node-5] 2025-06-22 
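The "Create filesystem pools" and "Create ceph filesystem" tasks above create the cephfs_data and cephfs_metadata pools (pg_num 16, size 3, replicated_rule) and the filesystem on top of them, delegated to the first monitor. Roughly equivalent CLI, as a sketch that assumes the default filesystem name cephfs used by ceph-ansible (the name itself is not printed in the log):

  # Sketch: CephFS pools and filesystem matching the task items shown above
  ceph osd pool create cephfs_data 16 16 replicated replicated_rule
  ceph osd pool create cephfs_metadata 16 16 replicated replicated_rule
  ceph osd pool set cephfs_data size 3
  ceph osd pool set cephfs_metadata size 3
  # 'ceph fs new' tags both pools with the cephfs application on its own
  ceph fs new cephfs cephfs_metadata cephfs_data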
20:02:26.033914 | orchestrator | 2025-06-22 20:02:26.033918 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-22 20:02:26.033923 | orchestrator | Sunday 22 June 2025 20:00:54 +0000 (0:00:01.645) 0:09:57.918 *********** 2025-06-22 20:02:26.033928 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.033932 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.033937 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.033941 | orchestrator | 2025-06-22 20:02:26.033946 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-22 20:02:26.033950 | orchestrator | Sunday 22 June 2025 20:00:57 +0000 (0:00:02.769) 0:10:00.687 *********** 2025-06-22 20:02:26.033955 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.033959 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.033964 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.033969 | orchestrator | 2025-06-22 20:02:26.033973 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-22 20:02:26.033978 | orchestrator | Sunday 22 June 2025 20:00:57 +0000 (0:00:00.317) 0:10:01.005 *********** 2025-06-22 20:02:26.033982 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.033987 | orchestrator | 2025-06-22 20:02:26.033991 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-22 20:02:26.033996 | orchestrator | Sunday 22 June 2025 20:00:58 +0000 (0:00:00.769) 0:10:01.774 *********** 2025-06-22 20:02:26.034001 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.034005 | orchestrator | 2025-06-22 20:02:26.034037 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-22 20:02:26.034043 | orchestrator | Sunday 22 June 2025 20:00:58 +0000 (0:00:00.463) 0:10:02.238 *********** 2025-06-22 20:02:26.034048 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.034052 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.034057 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.034061 | orchestrator | 2025-06-22 20:02:26.034066 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-22 20:02:26.034071 | orchestrator | Sunday 22 June 2025 20:01:00 +0000 (0:00:01.195) 0:10:03.433 *********** 2025-06-22 20:02:26.034075 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.034080 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.034084 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.034089 | orchestrator | 2025-06-22 20:02:26.034093 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-22 20:02:26.034098 | orchestrator | Sunday 22 June 2025 20:01:01 +0000 (0:00:01.269) 0:10:04.702 *********** 2025-06-22 20:02:26.034103 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.034107 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.034112 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.034116 | orchestrator | 2025-06-22 20:02:26.034121 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-06-22 20:02:26.034125 | orchestrator 
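The systemd tasks above (generate the unit file, generate the ceph-mds target file, enable the target, start the container) follow the same pattern used earlier for ceph-crash and later for ceph-rgw. A minimal sketch of what that wiring amounts to on one MDS host, with the ceph-mds@<hostname> unit name assumed from the usual ceph-ansible naming convention rather than taken from this log:

  # Sketch: activate the templated units for the containerized MDS
  systemctl daemon-reload                        # pick up the generated unit files
  systemctl enable ceph-mds.target               # "Enable ceph-mds.target"
  systemctl enable --now "ceph-mds@$(hostname)"  # "Systemd start mds container"

The "Wait for mds socket to exist" task that follows only confirms the daemon came up before the handlers run.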
| Sunday 22 June 2025 20:01:03 +0000 (0:00:01.833) 0:10:06.535 *********** 2025-06-22 20:02:26.034153 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.034158 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.034162 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.034167 | orchestrator | 2025-06-22 20:02:26.034172 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-22 20:02:26.034176 | orchestrator | Sunday 22 June 2025 20:01:05 +0000 (0:00:02.089) 0:10:08.625 *********** 2025-06-22 20:02:26.034181 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034186 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034194 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034199 | orchestrator | 2025-06-22 20:02:26.034203 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:02:26.034208 | orchestrator | Sunday 22 June 2025 20:01:06 +0000 (0:00:01.420) 0:10:10.045 *********** 2025-06-22 20:02:26.034213 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.034217 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.034222 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.034226 | orchestrator | 2025-06-22 20:02:26.034231 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-22 20:02:26.034236 | orchestrator | Sunday 22 June 2025 20:01:07 +0000 (0:00:00.702) 0:10:10.748 *********** 2025-06-22 20:02:26.034240 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.034245 | orchestrator | 2025-06-22 20:02:26.034250 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-22 20:02:26.034254 | orchestrator | Sunday 22 June 2025 20:01:08 +0000 (0:00:00.736) 0:10:11.484 *********** 2025-06-22 20:02:26.034259 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034263 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034268 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034272 | orchestrator | 2025-06-22 20:02:26.034277 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-22 20:02:26.034282 | orchestrator | Sunday 22 June 2025 20:01:08 +0000 (0:00:00.332) 0:10:11.817 *********** 2025-06-22 20:02:26.034286 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.034291 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.034295 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.034299 | orchestrator | 2025-06-22 20:02:26.034303 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-22 20:02:26.034307 | orchestrator | Sunday 22 June 2025 20:01:09 +0000 (0:00:01.290) 0:10:13.107 *********** 2025-06-22 20:02:26.034312 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.034316 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.034320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.034324 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034328 | orchestrator | 2025-06-22 20:02:26.034332 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-22 20:02:26.034339 | orchestrator | Sunday 22 
June 2025 20:01:10 +0000 (0:00:00.877) 0:10:13.985 *********** 2025-06-22 20:02:26.034344 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034348 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034352 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034356 | orchestrator | 2025-06-22 20:02:26.034361 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-22 20:02:26.034365 | orchestrator | 2025-06-22 20:02:26.034369 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:02:26.034373 | orchestrator | Sunday 22 June 2025 20:01:11 +0000 (0:00:00.852) 0:10:14.837 *********** 2025-06-22 20:02:26.034377 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.034382 | orchestrator | 2025-06-22 20:02:26.034386 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:02:26.034390 | orchestrator | Sunday 22 June 2025 20:01:12 +0000 (0:00:00.504) 0:10:15.341 *********** 2025-06-22 20:02:26.034394 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.034398 | orchestrator | 2025-06-22 20:02:26.034402 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:02:26.034407 | orchestrator | Sunday 22 June 2025 20:01:12 +0000 (0:00:00.713) 0:10:16.054 *********** 2025-06-22 20:02:26.034411 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034418 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034422 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034426 | orchestrator | 2025-06-22 20:02:26.034433 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:02:26.034438 | orchestrator | Sunday 22 June 2025 20:01:13 +0000 (0:00:00.313) 0:10:16.368 *********** 2025-06-22 20:02:26.034442 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034446 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034450 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034454 | orchestrator | 2025-06-22 20:02:26.034458 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:02:26.034463 | orchestrator | Sunday 22 June 2025 20:01:13 +0000 (0:00:00.714) 0:10:17.083 *********** 2025-06-22 20:02:26.034467 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034471 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034475 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034479 | orchestrator | 2025-06-22 20:02:26.034483 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:02:26.034488 | orchestrator | Sunday 22 June 2025 20:01:14 +0000 (0:00:00.731) 0:10:17.814 *********** 2025-06-22 20:02:26.034492 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034496 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034500 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034504 | orchestrator | 2025-06-22 20:02:26.034509 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:02:26.034513 | orchestrator | Sunday 22 June 2025 20:01:15 +0000 (0:00:01.070) 0:10:18.885 
*********** 2025-06-22 20:02:26.034517 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034521 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034525 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034530 | orchestrator | 2025-06-22 20:02:26.034534 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:02:26.034538 | orchestrator | Sunday 22 June 2025 20:01:15 +0000 (0:00:00.326) 0:10:19.212 *********** 2025-06-22 20:02:26.034542 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034546 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034550 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034555 | orchestrator | 2025-06-22 20:02:26.034559 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:02:26.034563 | orchestrator | Sunday 22 June 2025 20:01:16 +0000 (0:00:00.310) 0:10:19.523 *********** 2025-06-22 20:02:26.034567 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034571 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034575 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034579 | orchestrator | 2025-06-22 20:02:26.034584 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:02:26.034588 | orchestrator | Sunday 22 June 2025 20:01:16 +0000 (0:00:00.306) 0:10:19.829 *********** 2025-06-22 20:02:26.034592 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034596 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034600 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034604 | orchestrator | 2025-06-22 20:02:26.034609 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:02:26.034613 | orchestrator | Sunday 22 June 2025 20:01:17 +0000 (0:00:01.009) 0:10:20.838 *********** 2025-06-22 20:02:26.034617 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034621 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034625 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034629 | orchestrator | 2025-06-22 20:02:26.034633 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:02:26.034638 | orchestrator | Sunday 22 June 2025 20:01:18 +0000 (0:00:00.743) 0:10:21.582 *********** 2025-06-22 20:02:26.034642 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034646 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034650 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034658 | orchestrator | 2025-06-22 20:02:26.034663 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:02:26.034667 | orchestrator | Sunday 22 June 2025 20:01:18 +0000 (0:00:00.332) 0:10:21.915 *********** 2025-06-22 20:02:26.034671 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034675 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034679 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034683 | orchestrator | 2025-06-22 20:02:26.034688 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:02:26.034692 | orchestrator | Sunday 22 June 2025 20:01:18 +0000 (0:00:00.315) 0:10:22.231 *********** 2025-06-22 20:02:26.034696 | orchestrator | ok: [testbed-node-3] 2025-06-22 
20:02:26.034700 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034704 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034709 | orchestrator | 2025-06-22 20:02:26.034715 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:02:26.034719 | orchestrator | Sunday 22 June 2025 20:01:19 +0000 (0:00:00.648) 0:10:22.879 *********** 2025-06-22 20:02:26.034723 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034728 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034732 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034736 | orchestrator | 2025-06-22 20:02:26.034740 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:02:26.034744 | orchestrator | Sunday 22 June 2025 20:01:19 +0000 (0:00:00.330) 0:10:23.210 *********** 2025-06-22 20:02:26.034748 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034753 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034757 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034761 | orchestrator | 2025-06-22 20:02:26.034765 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:02:26.034769 | orchestrator | Sunday 22 June 2025 20:01:20 +0000 (0:00:00.323) 0:10:23.533 *********** 2025-06-22 20:02:26.034773 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034777 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034782 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034786 | orchestrator | 2025-06-22 20:02:26.034790 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:02:26.034794 | orchestrator | Sunday 22 June 2025 20:01:20 +0000 (0:00:00.311) 0:10:23.845 *********** 2025-06-22 20:02:26.034798 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034802 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034807 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034811 | orchestrator | 2025-06-22 20:02:26.034815 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:02:26.034821 | orchestrator | Sunday 22 June 2025 20:01:21 +0000 (0:00:00.561) 0:10:24.406 *********** 2025-06-22 20:02:26.034826 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.034830 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.034834 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.034838 | orchestrator | 2025-06-22 20:02:26.034842 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:02:26.034846 | orchestrator | Sunday 22 June 2025 20:01:21 +0000 (0:00:00.321) 0:10:24.727 *********** 2025-06-22 20:02:26.034851 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034855 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034859 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.034863 | orchestrator | 2025-06-22 20:02:26.034867 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:02:26.034871 | orchestrator | Sunday 22 June 2025 20:01:21 +0000 (0:00:00.321) 0:10:25.049 *********** 2025-06-22 20:02:26.034876 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.034880 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.034884 | orchestrator | ok: [testbed-node-5] 2025-06-22 
20:02:26.034888 | orchestrator | 2025-06-22 20:02:26.034892 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-22 20:02:26.034909 | orchestrator | Sunday 22 June 2025 20:01:22 +0000 (0:00:00.773) 0:10:25.822 *********** 2025-06-22 20:02:26.034914 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.034918 | orchestrator | 2025-06-22 20:02:26.034922 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-22 20:02:26.034926 | orchestrator | Sunday 22 June 2025 20:01:23 +0000 (0:00:00.564) 0:10:26.387 *********** 2025-06-22 20:02:26.034930 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.034935 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:02:26.034939 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:02:26.034943 | orchestrator | 2025-06-22 20:02:26.034947 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:02:26.034951 | orchestrator | Sunday 22 June 2025 20:01:25 +0000 (0:00:02.622) 0:10:29.009 *********** 2025-06-22 20:02:26.034956 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:02:26.034960 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:02:26.034964 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.034968 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:02:26.034972 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 20:02:26.034976 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.034980 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:02:26.034985 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 20:02:26.034989 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.034993 | orchestrator | 2025-06-22 20:02:26.034997 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-22 20:02:26.035001 | orchestrator | Sunday 22 June 2025 20:01:26 +0000 (0:00:01.231) 0:10:30.241 *********** 2025-06-22 20:02:26.035005 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.035009 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.035014 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.035018 | orchestrator | 2025-06-22 20:02:26.035022 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-22 20:02:26.035026 | orchestrator | Sunday 22 June 2025 20:01:27 +0000 (0:00:00.743) 0:10:30.984 *********** 2025-06-22 20:02:26.035030 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.035034 | orchestrator | 2025-06-22 20:02:26.035039 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-22 20:02:26.035043 | orchestrator | Sunday 22 June 2025 20:01:28 +0000 (0:00:00.682) 0:10:31.667 *********** 2025-06-22 20:02:26.035047 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.035053 | orchestrator | changed: [testbed-node-4 -> 
testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.035058 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.035062 | orchestrator | 2025-06-22 20:02:26.035067 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-22 20:02:26.035071 | orchestrator | Sunday 22 June 2025 20:01:29 +0000 (0:00:00.850) 0:10:32.518 *********** 2025-06-22 20:02:26.035075 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.035079 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 20:02:26.035084 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.035091 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 20:02:26.035095 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.035099 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 20:02:26.035103 | orchestrator | 2025-06-22 20:02:26.035107 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-22 20:02:26.035114 | orchestrator | Sunday 22 June 2025 20:01:34 +0000 (0:00:04.944) 0:10:37.462 *********** 2025-06-22 20:02:26.035118 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.035122 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:02:26.035138 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.035142 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:02:26.035147 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:02:26.035151 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:02:26.035155 | orchestrator | 2025-06-22 20:02:26.035159 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:02:26.035163 | orchestrator | Sunday 22 June 2025 20:01:36 +0000 (0:00:02.485) 0:10:39.948 *********** 2025-06-22 20:02:26.035167 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:02:26.035172 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.035176 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:02:26.035180 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.035184 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:02:26.035188 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.035193 | orchestrator | 2025-06-22 20:02:26.035197 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-22 20:02:26.035201 | orchestrator | Sunday 22 June 2025 20:01:37 +0000 (0:00:01.228) 0:10:41.177 *********** 2025-06-22 20:02:26.035205 | orchestrator | included: 
/ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-22 20:02:26.035209 | orchestrator | 2025-06-22 20:02:26.035214 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-22 20:02:26.035218 | orchestrator | Sunday 22 June 2025 20:01:38 +0000 (0:00:00.237) 0:10:41.415 *********** 2025-06-22 20:02:26.035222 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035239 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035243 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.035248 | orchestrator | 2025-06-22 20:02:26.035252 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-22 20:02:26.035256 | orchestrator | Sunday 22 June 2025 20:01:39 +0000 (0:00:00.840) 0:10:42.255 *********** 2025-06-22 20:02:26.035260 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:02:26.035289 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.035293 | orchestrator | 2025-06-22 20:02:26.035299 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-22 20:02:26.035304 | orchestrator | Sunday 22 June 2025 20:01:40 +0000 (0:00:01.063) 0:10:43.319 *********** 2025-06-22 20:02:26.035308 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:02:26.035312 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:02:26.035316 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:02:26.035321 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}}) 2025-06-22 20:02:26.035325 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:02:26.035329 | orchestrator | 2025-06-22 20:02:26.035333 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-22 20:02:26.035340 | orchestrator | Sunday 22 June 2025 20:02:11 +0000 (0:00:31.198) 0:11:14.518 *********** 2025-06-22 20:02:26.035344 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.035349 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.035353 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.035357 | orchestrator | 2025-06-22 20:02:26.035361 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-22 20:02:26.035365 | orchestrator | Sunday 22 June 2025 20:02:11 +0000 (0:00:00.345) 0:11:14.864 *********** 2025-06-22 20:02:26.035369 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.035374 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.035378 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.035382 | orchestrator | 2025-06-22 20:02:26.035386 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-22 20:02:26.035390 | orchestrator | Sunday 22 June 2025 20:02:11 +0000 (0:00:00.311) 0:11:15.175 *********** 2025-06-22 20:02:26.035394 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.035398 | orchestrator | 2025-06-22 20:02:26.035403 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-22 20:02:26.035407 | orchestrator | Sunday 22 June 2025 20:02:12 +0000 (0:00:00.742) 0:11:15.918 *********** 2025-06-22 20:02:26.035411 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.035415 | orchestrator | 2025-06-22 20:02:26.035419 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-22 20:02:26.035423 | orchestrator | Sunday 22 June 2025 20:02:13 +0000 (0:00:00.547) 0:11:16.466 *********** 2025-06-22 20:02:26.035427 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.035432 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.035436 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.035440 | orchestrator | 2025-06-22 20:02:26.035444 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-22 20:02:26.035452 | orchestrator | Sunday 22 June 2025 20:02:14 +0000 (0:00:01.223) 0:11:17.689 *********** 2025-06-22 20:02:26.035456 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.035460 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.035464 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.035469 | orchestrator | 2025-06-22 20:02:26.035473 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-22 20:02:26.035477 | orchestrator | Sunday 22 June 2025 20:02:15 +0000 (0:00:01.455) 0:11:19.144 *********** 2025-06-22 20:02:26.035481 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:02:26.035485 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:02:26.035489 | 
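The "Create rgw pools" task above, the 31-second entry in the task recap further down, creates the five default RADOS Gateway pools listed in its items on the first monitor. A rough CLI equivalent, sketched with the pg_num 8 / size 3 / replicated values shown in those items:

  # Sketch: RGW pools as listed in the task items above
  for pool in default.rgw.buckets.data default.rgw.buckets.index \
              default.rgw.control default.rgw.log default.rgw.meta; do
      ceph osd pool create "$pool" 8 8 replicated
      ceph osd pool set "$pool" size 3
      ceph osd pool application enable "$pool" rgw
  done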
orchestrator | changed: [testbed-node-5] 2025-06-22 20:02:26.035493 | orchestrator | 2025-06-22 20:02:26.035498 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-22 20:02:26.035502 | orchestrator | Sunday 22 June 2025 20:02:17 +0000 (0:00:01.797) 0:11:20.942 *********** 2025-06-22 20:02:26.035506 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.035510 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.035514 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:02:26.035519 | orchestrator | 2025-06-22 20:02:26.035523 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:02:26.035527 | orchestrator | Sunday 22 June 2025 20:02:20 +0000 (0:00:02.648) 0:11:23.590 *********** 2025-06-22 20:02:26.035531 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.035535 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.035539 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.035543 | orchestrator | 2025-06-22 20:02:26.035548 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-22 20:02:26.035552 | orchestrator | Sunday 22 June 2025 20:02:20 +0000 (0:00:00.346) 0:11:23.937 *********** 2025-06-22 20:02:26.035556 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:02:26.035560 | orchestrator | 2025-06-22 20:02:26.035566 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-22 20:02:26.035570 | orchestrator | Sunday 22 June 2025 20:02:21 +0000 (0:00:00.505) 0:11:24.442 *********** 2025-06-22 20:02:26.035574 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.035579 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.035583 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.035587 | orchestrator | 2025-06-22 20:02:26.035591 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-22 20:02:26.035595 | orchestrator | Sunday 22 June 2025 20:02:21 +0000 (0:00:00.530) 0:11:24.973 *********** 2025-06-22 20:02:26.035599 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.035604 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:02:26.035608 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:02:26.035612 | orchestrator | 2025-06-22 20:02:26.035616 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-22 20:02:26.035620 | orchestrator | Sunday 22 June 2025 20:02:22 +0000 (0:00:00.348) 0:11:25.322 *********** 2025-06-22 20:02:26.035624 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:02:26.035629 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:02:26.035633 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:02:26.035637 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:02:26.035641 | orchestrator | 2025-06-22 20:02:26.035645 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_rgw_handler_called after restart] ********* 2025-06-22 20:02:26.035649 | orchestrator | Sunday 22 June 2025 20:02:22 +0000 (0:00:00.587) 0:11:25.909 *********** 2025-06-22 20:02:26.035656 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:02:26.035661 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:02:26.035667 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:02:26.035671 | orchestrator | 2025-06-22 20:02:26.035676 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:02:26.035680 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-06-22 20:02:26.035684 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-22 20:02:26.035688 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-22 20:02:26.035693 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-06-22 20:02:26.035697 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-22 20:02:26.035701 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-22 20:02:26.035705 | orchestrator | 2025-06-22 20:02:26.035710 | orchestrator | 2025-06-22 20:02:26.035714 | orchestrator | 2025-06-22 20:02:26.035718 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:02:26.035722 | orchestrator | Sunday 22 June 2025 20:02:22 +0000 (0:00:00.249) 0:11:26.158 *********** 2025-06-22 20:02:26.035726 | orchestrator | =============================================================================== 2025-06-22 20:02:26.035731 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 74.54s 2025-06-22 20:02:26.035735 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 47.00s 2025-06-22 20:02:26.035739 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.54s 2025-06-22 20:02:26.035743 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.20s 2025-06-22 20:02:26.035747 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 22.00s 2025-06-22 20:02:26.035752 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 16.58s 2025-06-22 20:02:26.035756 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.11s 2025-06-22 20:02:26.035760 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.92s 2025-06-22 20:02:26.035764 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.58s 2025-06-22 20:02:26.035768 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.52s 2025-06-22 20:02:26.035772 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.95s 2025-06-22 20:02:26.035777 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.80s 2025-06-22 20:02:26.035781 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.17s 2025-06-22 20:02:26.035785 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.94s 2025-06-22 20:02:26.035789 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 4.39s 2025-06-22 20:02:26.035793 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.24s 2025-06-22 20:02:26.035797 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.16s 2025-06-22 20:02:26.035802 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 3.98s 2025-06-22 20:02:26.035806 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.95s 2025-06-22 20:02:26.035810 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 3.88s 2025-06-22 20:02:26.035819 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:26.035824 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:26.035828 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:26.035832 | orchestrator | 2025-06-22 20:02:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:29.060477 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:29.061327 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:29.062562 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:29.062600 | orchestrator | 2025-06-22 20:02:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:32.115616 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:32.116755 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:32.118651 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:32.119033 | orchestrator | 2025-06-22 20:02:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:35.162273 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task 
9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:35.163042 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:35.164571 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:35.164698 | orchestrator | 2025-06-22 20:02:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:38.200734 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:38.202215 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:38.204931 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:38.204974 | orchestrator | 2025-06-22 20:02:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:41.243987 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:41.244496 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:41.246665 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:41.246726 | orchestrator | 2025-06-22 20:02:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:44.290388 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:44.291173 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:44.293627 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state STARTED 2025-06-22 20:02:44.293692 | orchestrator | 2025-06-22 20:02:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:47.347841 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:47.347979 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:47.348009 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task 4cdbd8b2-b004-4c0b-b2e2-3ad8fe2851d0 is in state SUCCESS 2025-06-22 20:02:47.349521 | orchestrator | 2025-06-22 20:02:47.349572 | orchestrator | 2025-06-22 20:02:47.349585 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:02:47.349598 | orchestrator | 2025-06-22 20:02:47.349609 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:02:47.349621 | orchestrator | Sunday 22 June 2025 19:59:51 +0000 (0:00:00.235) 0:00:00.235 *********** 2025-06-22 20:02:47.349633 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:47.349645 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:47.349656 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:47.349667 | orchestrator | 2025-06-22 20:02:47.349678 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:02:47.349689 | orchestrator | Sunday 22 June 2025 19:59:51 +0000 (0:00:00.242) 0:00:00.478 *********** 2025-06-22 20:02:47.349701 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-22 20:02:47.349712 | orchestrator | ok: [testbed-node-1] => 
(item=enable_opensearch_True) 2025-06-22 20:02:47.349723 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-22 20:02:47.349734 | orchestrator | 2025-06-22 20:02:47.349745 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-22 20:02:47.349756 | orchestrator | 2025-06-22 20:02:47.349767 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 20:02:47.349779 | orchestrator | Sunday 22 June 2025 19:59:52 +0000 (0:00:00.349) 0:00:00.827 *********** 2025-06-22 20:02:47.349790 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:47.349801 | orchestrator | 2025-06-22 20:02:47.349837 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-22 20:02:47.349849 | orchestrator | Sunday 22 June 2025 19:59:52 +0000 (0:00:00.384) 0:00:01.211 *********** 2025-06-22 20:02:47.349860 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 20:02:47.349871 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 20:02:47.349882 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 20:02:47.349892 | orchestrator | 2025-06-22 20:02:47.349903 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-22 20:02:47.349914 | orchestrator | Sunday 22 June 2025 19:59:53 +0000 (0:00:00.620) 0:00:01.831 *********** 2025-06-22 20:02:47.349946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.349963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.350001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 
'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.350070 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.350094 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.350108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.350154 | orchestrator | 2025-06-22 20:02:47.350167 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 20:02:47.350180 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:01.439) 0:00:03.271 *********** 2025-06-22 20:02:47.350193 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:47.350205 | orchestrator | 2025-06-22 20:02:47.350217 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-22 20:02:47.350230 | orchestrator | Sunday 22 June 2025 19:59:55 +0000 (0:00:00.508) 0:00:03.779 *********** 2025-06-22 20:02:47.350253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.350268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.350287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.350301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.350341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.350366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.350397 | orchestrator | 2025-06-22 20:02:47.350420 
| orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-22 20:02:47.350437 | orchestrator | Sunday 22 June 2025 19:59:57 +0000 (0:00:02.566) 0:00:06.346 *********** 2025-06-22 20:02:47.350464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:02:47.350485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:02:47.350525 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:47.350547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:02:47.350573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:02:47.350586 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:47.350603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:02:47.350620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:02:47.350640 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:47.350651 | orchestrator | 2025-06-22 20:02:47.350662 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-22 20:02:47.350673 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:01.475) 0:00:07.821 *********** 2025-06-22 20:02:47.350685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:02:47.350704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:02:47.350716 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:47.350733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:02:47.350752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:02:47.350763 | orchestrator | skipping: [testbed-node-1] 
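Aside for readability: the `item=` dicts printed by the opensearch and service-cert-copy tasks above are kolla-ansible service definitions flattened into single log lines. The sketch below is only a reconstruction of one of those items (the `opensearch` entry for testbed-node-0), using exactly the values shown in the log; it is an illustrative, hypothetical snippet for easier reading, not the kolla-ansible source.

```python
# Illustrative only: rebuild the 'opensearch' service definition that the role
# loops over, using the values printed in the log above, and pretty-print it.
import json

opensearch_service = {
    "container_name": "opensearch",
    "group": "opensearch",
    "enabled": True,
    "image": "registry.osism.tech/kolla/opensearch:2024.2",
    "environment": {
        "OPENSEARCH_JAVA_OPTS": "-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true"
    },
    "volumes": [
        "/etc/kolla/opensearch/:/var/lib/kolla/config_files/",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "opensearch:/var/lib/opensearch/data",
        "kolla_logs:/var/log/kolla/",
    ],
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9200"],
        "timeout": "30",
    },
    "haproxy": {
        "opensearch": {
            "enabled": True,
            "mode": "http",
            "external": False,
            "port": "9200",
            "frontend_http_extra": ["option dontlog-normal"],
        }
    },
}

# Pretty-printing makes the structure of each loop item much easier to scan
# than the single-line repr in the console output.
print(json.dumps(opensearch_service, indent=2))
```

The opensearch-dashboards item follows the same shape, differing only in container name, image, port 5601, and its haproxy entries, as the surrounding log lines show.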
2025-06-22 20:02:47.350775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:02:47.350796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:02:47.350808 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:47.350819 | orchestrator | 2025-06-22 20:02:47.350830 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-22 20:02:47.350842 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:00.869) 0:00:08.691 *********** 2025-06-22 20:02:47.350858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.350876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.350888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.350909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.350921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2025-06-22 20:02:47.350958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.350970 | orchestrator | 2025-06-22 20:02:47.350981 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-22 20:02:47.350993 | orchestrator | Sunday 22 June 2025 20:00:02 +0000 (0:00:02.269) 0:00:10.960 *********** 2025-06-22 20:02:47.351004 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:47.351015 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:47.351026 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:47.351037 | orchestrator | 2025-06-22 20:02:47.351048 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-22 20:02:47.351059 | orchestrator | Sunday 22 June 2025 20:00:05 +0000 (0:00:03.540) 0:00:14.500 *********** 2025-06-22 20:02:47.351070 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:47.351081 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:47.351091 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:47.351102 | orchestrator | 2025-06-22 20:02:47.351113 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-22 20:02:47.351124 | orchestrator | Sunday 22 June 2025 20:00:07 +0000 (0:00:01.581) 0:00:16.082 *********** 2025-06-22 20:02:47.351161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.351181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.351194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:02:47.351219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.351232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.351251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:02:47.351264 | orchestrator | 2025-06-22 20:02:47.351275 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 20:02:47.351292 | orchestrator | Sunday 22 June 2025 20:00:09 +0000 (0:00:01.947) 0:00:18.030 *********** 2025-06-22 20:02:47.351303 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:47.351314 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:47.351325 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:47.351336 | orchestrator | 2025-06-22 20:02:47.351347 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 20:02:47.351358 | orchestrator | Sunday 22 June 2025 20:00:09 +0000 (0:00:00.294) 0:00:18.324 *********** 2025-06-22 20:02:47.351369 | orchestrator | 2025-06-22 20:02:47.351380 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 20:02:47.351391 | orchestrator | Sunday 22 June 2025 20:00:09 +0000 (0:00:00.061) 0:00:18.386 *********** 2025-06-22 20:02:47.351401 | orchestrator | 2025-06-22 20:02:47.351412 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 20:02:47.351423 | orchestrator | Sunday 22 June 2025 20:00:09 +0000 (0:00:00.061) 0:00:18.447 *********** 2025-06-22 20:02:47.351434 | orchestrator | 2025-06-22 20:02:47.351445 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-22 20:02:47.351456 | orchestrator | Sunday 22 June 2025 20:00:09 +0000 (0:00:00.180) 0:00:18.628 *********** 2025-06-22 20:02:47.351467 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:47.351477 | orchestrator | 2025-06-22 20:02:47.351493 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-22 20:02:47.351505 | orchestrator | Sunday 22 June 2025 20:00:10 +0000 (0:00:00.213) 0:00:18.841 *********** 2025-06-22 20:02:47.351516 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:47.351526 | orchestrator | 2025-06-22 20:02:47.351537 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-22 20:02:47.351548 | orchestrator | Sunday 22 June 2025 20:00:10 +0000 (0:00:00.188) 0:00:19.030 *********** 2025-06-22 20:02:47.351559 | orchestrator | 
changed: [testbed-node-0] 2025-06-22 20:02:47.351570 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:47.351581 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:47.351592 | orchestrator | 2025-06-22 20:02:47.351603 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-22 20:02:47.351614 | orchestrator | Sunday 22 June 2025 20:01:17 +0000 (0:01:07.302) 0:01:26.332 *********** 2025-06-22 20:02:47.351625 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:47.351636 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:47.351647 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:47.351657 | orchestrator | 2025-06-22 20:02:47.351668 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 20:02:47.351679 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:01:17.988) 0:02:44.321 *********** 2025-06-22 20:02:47.351690 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:47.351701 | orchestrator | 2025-06-22 20:02:47.351712 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-22 20:02:47.351723 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:00.597) 0:02:44.919 *********** 2025-06-22 20:02:47.351734 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:47.351745 | orchestrator | 2025-06-22 20:02:47.351756 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-22 20:02:47.351767 | orchestrator | Sunday 22 June 2025 20:02:38 +0000 (0:00:02.131) 0:02:47.050 *********** 2025-06-22 20:02:47.351778 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:47.351788 | orchestrator | 2025-06-22 20:02:47.351799 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-22 20:02:47.351810 | orchestrator | Sunday 22 June 2025 20:02:40 +0000 (0:00:01.974) 0:02:49.024 *********** 2025-06-22 20:02:47.351821 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:47.351832 | orchestrator | 2025-06-22 20:02:47.351843 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-22 20:02:47.351860 | orchestrator | Sunday 22 June 2025 20:02:42 +0000 (0:00:02.320) 0:02:51.344 *********** 2025-06-22 20:02:47.351871 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:47.351882 | orchestrator | 2025-06-22 20:02:47.351893 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:02:47.351905 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:02:47.351917 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:02:47.351928 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:02:47.351939 | orchestrator | 2025-06-22 20:02:47.351950 | orchestrator | 2025-06-22 20:02:47.351961 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:02:47.351977 | orchestrator | Sunday 22 June 2025 20:02:45 +0000 (0:00:02.426) 0:02:53.771 *********** 2025-06-22 20:02:47.351989 | orchestrator | =============================================================================== 2025-06-22 
20:02:47.351999 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 77.99s 2025-06-22 20:02:47.352010 | orchestrator | opensearch : Restart opensearch container ------------------------------ 67.30s 2025-06-22 20:02:47.352021 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.54s 2025-06-22 20:02:47.352032 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.57s 2025-06-22 20:02:47.352043 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.43s 2025-06-22 20:02:47.352054 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.32s 2025-06-22 20:02:47.352065 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.27s 2025-06-22 20:02:47.352075 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.13s 2025-06-22 20:02:47.352086 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 1.97s 2025-06-22 20:02:47.352097 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.95s 2025-06-22 20:02:47.352108 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.58s 2025-06-22 20:02:47.352119 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.48s 2025-06-22 20:02:47.352148 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.44s 2025-06-22 20:02:47.352159 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.87s 2025-06-22 20:02:47.352170 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.62s 2025-06-22 20:02:47.352181 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.60s 2025-06-22 20:02:47.352192 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.51s 2025-06-22 20:02:47.352202 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.38s 2025-06-22 20:02:47.352214 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.35s 2025-06-22 20:02:47.352229 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.30s 2025-06-22 20:02:47.352240 | orchestrator | 2025-06-22 20:02:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:50.393230 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:50.395825 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:50.395947 | orchestrator | 2025-06-22 20:02:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:53.452431 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:53.453559 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:53.453816 | orchestrator | 2025-06-22 20:02:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:56.500701 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:56.501362 | orchestrator | 2025-06-22 
20:02:56 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state STARTED 2025-06-22 20:02:56.501403 | orchestrator | 2025-06-22 20:02:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:59.560220 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:02:59.563383 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task 7578fef9-cd7c-4fca-9dc3-075f1296ba9b is in state SUCCESS 2025-06-22 20:02:59.565107 | orchestrator | 2025-06-22 20:02:59.565176 | orchestrator | 2025-06-22 20:02:59.565189 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-22 20:02:59.565203 | orchestrator | 2025-06-22 20:02:59.565214 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-22 20:02:59.565226 | orchestrator | Sunday 22 June 2025 19:59:51 +0000 (0:00:00.073) 0:00:00.073 *********** 2025-06-22 20:02:59.565238 | orchestrator | ok: [localhost] => { 2025-06-22 20:02:59.565250 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-22 20:02:59.565262 | orchestrator | } 2025-06-22 20:02:59.565273 | orchestrator | 2025-06-22 20:02:59.565285 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-22 20:02:59.565296 | orchestrator | Sunday 22 June 2025 19:59:51 +0000 (0:00:00.029) 0:00:00.102 *********** 2025-06-22 20:02:59.565307 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-22 20:02:59.565320 | orchestrator | ...ignoring 2025-06-22 20:02:59.565331 | orchestrator | 2025-06-22 20:02:59.565342 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-22 20:02:59.565354 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:02.725) 0:00:02.828 *********** 2025-06-22 20:02:59.565365 | orchestrator | skipping: [localhost] 2025-06-22 20:02:59.565376 | orchestrator | 2025-06-22 20:02:59.565388 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-22 20:02:59.565399 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.041) 0:00:02.869 *********** 2025-06-22 20:02:59.565410 | orchestrator | ok: [localhost] 2025-06-22 20:02:59.565421 | orchestrator | 2025-06-22 20:02:59.565432 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:02:59.565444 | orchestrator | 2025-06-22 20:02:59.565455 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:02:59.565466 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.131) 0:00:03.000 *********** 2025-06-22 20:02:59.565477 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.565488 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:59.565499 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:59.565510 | orchestrator | 2025-06-22 20:02:59.565521 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:02:59.565532 | orchestrator | Sunday 22 June 2025 19:59:54 +0000 (0:00:00.301) 0:00:03.302 *********** 2025-06-22 20:02:59.565543 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-22 20:02:59.565555 | orchestrator | ok: [testbed-node-1] 
=> (item=enable_mariadb_True) 2025-06-22 20:02:59.565567 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-22 20:02:59.565577 | orchestrator | 2025-06-22 20:02:59.565589 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-22 20:02:59.565634 | orchestrator | 2025-06-22 20:02:59.565647 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-22 20:02:59.565658 | orchestrator | Sunday 22 June 2025 19:59:55 +0000 (0:00:00.509) 0:00:03.811 *********** 2025-06-22 20:02:59.565669 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:02:59.565679 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 20:02:59.565690 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 20:02:59.565703 | orchestrator | 2025-06-22 20:02:59.565715 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:02:59.565728 | orchestrator | Sunday 22 June 2025 19:59:55 +0000 (0:00:00.525) 0:00:04.336 *********** 2025-06-22 20:02:59.565740 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:59.565753 | orchestrator | 2025-06-22 20:02:59.565765 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-22 20:02:59.565791 | orchestrator | Sunday 22 June 2025 19:59:56 +0000 (0:00:00.479) 0:00:04.816 *********** 2025-06-22 20:02:59.565826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:59.565846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 
'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:59.565877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-06-22 20:02:59.565891 | orchestrator | 2025-06-22 20:02:59.565911 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-22 20:02:59.565925 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:03.379) 0:00:08.196 *********** 2025-06-22 20:02:59.565938 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.565951 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.565964 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.565976 | orchestrator | 2025-06-22 20:02:59.565989 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-22 20:02:59.566001 | orchestrator | Sunday 22 June 2025 20:00:00 +0000 (0:00:00.731) 0:00:08.928 *********** 2025-06-22 20:02:59.566014 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.566081 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.566093 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.566105 | orchestrator | 2025-06-22 20:02:59.566116 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-22 20:02:59.566202 | orchestrator | Sunday 22 June 2025 20:00:01 +0000 (0:00:01.417) 0:00:10.345 *********** 2025-06-22 20:02:59.566217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:59.566255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 
'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:59.566269 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:59.566288 | orchestrator | 2025-06-22 20:02:59.566299 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-22 20:02:59.566310 | orchestrator | Sunday 22 June 2025 20:00:05 +0000 (0:00:03.759) 
0:00:14.105 *********** 2025-06-22 20:02:59.566321 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.566331 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.566342 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.566353 | orchestrator | 2025-06-22 20:02:59.566364 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-22 20:02:59.566375 | orchestrator | Sunday 22 June 2025 20:00:06 +0000 (0:00:01.080) 0:00:15.185 *********** 2025-06-22 20:02:59.566385 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.566396 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:59.566407 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:59.566417 | orchestrator | 2025-06-22 20:02:59.566428 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:02:59.566444 | orchestrator | Sunday 22 June 2025 20:00:10 +0000 (0:00:04.153) 0:00:19.338 *********** 2025-06-22 20:02:59.566455 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:59.566466 | orchestrator | 2025-06-22 20:02:59.566476 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-22 20:02:59.566487 | orchestrator | Sunday 22 June 2025 20:00:11 +0000 (0:00:00.857) 0:00:20.196 *********** 2025-06-22 20:02:59.566508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566528 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.566540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566552 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.566576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566604 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.566615 | orchestrator | 2025-06-22 20:02:59.566626 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-22 20:02:59.566637 | orchestrator | Sunday 22 June 2025 20:00:14 +0000 (0:00:03.247) 0:00:23.444 *********** 2025-06-22 20:02:59.566648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566660 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.566682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566701 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.566712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566723 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.566733 | orchestrator | 2025-06-22 20:02:59.566742 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-22 20:02:59.566752 | orchestrator | Sunday 22 June 2025 20:00:17 +0000 (0:00:02.953) 0:00:26.397 *********** 2025-06-22 20:02:59.566766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566783 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.566801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566812 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.566826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:59.566837 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.566847 | orchestrator | 2025-06-22 20:02:59.566856 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-22 20:02:59.566872 | orchestrator | Sunday 22 June 2025 20:00:20 +0000 (0:00:03.171) 0:00:29.569 *********** 2025-06-22 20:02:59.566890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:59.566906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:59.566926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:59.566946 | orchestrator | 2025-06-22 20:02:59.566956 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-22 20:02:59.566966 | orchestrator | Sunday 22 June 2025 20:00:23 +0000 (0:00:02.660) 0:00:32.230 *********** 2025-06-22 20:02:59.566976 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.566985 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:59.566995 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:59.567004 | orchestrator | 2025-06-22 20:02:59.567014 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-22 20:02:59.567024 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:01.060) 0:00:33.290 *********** 2025-06-22 20:02:59.567034 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.567043 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:59.567053 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:59.567062 | orchestrator | 2025-06-22 20:02:59.567072 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-22 20:02:59.567082 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:00.420) 0:00:33.711 *********** 2025-06-22 20:02:59.567091 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.567101 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:59.567110 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:59.567120 | orchestrator | 2025-06-22 20:02:59.567148 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-22 20:02:59.567158 | orchestrator | Sunday 22 June 2025 20:00:25 +0000 (0:00:00.346) 0:00:34.058 *********** 2025-06-22 20:02:59.567169 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-22 20:02:59.567179 | orchestrator | ...ignoring 2025-06-22 20:02:59.567189 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-22 20:02:59.567199 | orchestrator | ...ignoring 2025-06-22 20:02:59.567208 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-22 20:02:59.567218 | orchestrator | ...ignoring 2025-06-22 20:02:59.567228 | orchestrator | 2025-06-22 20:02:59.567237 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-22 20:02:59.567251 | orchestrator | Sunday 22 June 2025 20:00:36 +0000 (0:00:11.055) 0:00:45.113 *********** 2025-06-22 20:02:59.567267 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.567277 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:59.567287 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:59.567296 | orchestrator | 2025-06-22 20:02:59.567306 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-22 20:02:59.567316 | orchestrator | Sunday 22 June 2025 20:00:37 +0000 (0:00:00.697) 0:00:45.811 *********** 2025-06-22 20:02:59.567326 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.567335 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.567345 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.567354 | orchestrator | 2025-06-22 20:02:59.567364 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-22 20:02:59.567374 | orchestrator | Sunday 22 June 2025 20:00:37 +0000 (0:00:00.440) 0:00:46.251 *********** 2025-06-22 20:02:59.567383 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.567393 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.567403 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.567412 | orchestrator | 2025-06-22 20:02:59.567422 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-22 20:02:59.567431 | orchestrator | Sunday 22 June 2025 20:00:37 +0000 (0:00:00.433) 0:00:46.684 *********** 2025-06-22 20:02:59.567441 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.567451 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.567460 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.567470 | orchestrator | 2025-06-22 20:02:59.567479 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-22 20:02:59.567489 | orchestrator | Sunday 22 June 2025 20:00:38 +0000 (0:00:00.419) 0:00:47.104 *********** 2025-06-22 20:02:59.567498 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.567508 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:59.567517 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:59.567527 | orchestrator | 2025-06-22 20:02:59.567536 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-22 20:02:59.567546 | orchestrator | Sunday 22 June 2025 20:00:38 +0000 (0:00:00.611) 0:00:47.715 *********** 2025-06-22 20:02:59.567561 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.567571 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.567580 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.567590 | orchestrator | 2025-06-22 20:02:59.567600 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:02:59.567610 | orchestrator | Sunday 22 June 2025 20:00:39 +0000 (0:00:00.449) 0:00:48.164 *********** 2025-06-22 20:02:59.567619 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.567629 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 20:02:59.567639 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-22 20:02:59.567649 | orchestrator | 2025-06-22 20:02:59.567658 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-22 20:02:59.567668 | orchestrator | Sunday 22 June 2025 20:00:39 +0000 (0:00:00.381) 0:00:48.546 *********** 2025-06-22 20:02:59.567678 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.567687 | orchestrator | 2025-06-22 20:02:59.567697 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-22 20:02:59.567706 | orchestrator | Sunday 22 June 2025 20:00:50 +0000 (0:00:11.074) 0:00:59.621 *********** 2025-06-22 20:02:59.567716 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.567726 | orchestrator | 2025-06-22 20:02:59.567736 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:02:59.567745 | orchestrator | Sunday 22 June 2025 20:00:51 +0000 (0:00:00.124) 0:00:59.745 *********** 2025-06-22 20:02:59.567755 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.567764 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.567774 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.567784 | orchestrator | 2025-06-22 20:02:59.567793 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-22 20:02:59.567808 | orchestrator | Sunday 22 June 2025 20:00:51 +0000 (0:00:00.953) 0:01:00.699 *********** 2025-06-22 20:02:59.567818 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.567828 | orchestrator | 2025-06-22 20:02:59.567837 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-22 20:02:59.567847 | orchestrator | Sunday 22 June 2025 20:00:59 +0000 (0:00:07.647) 0:01:08.347 *********** 2025-06-22 20:02:59.567857 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.567866 | orchestrator | 2025-06-22 20:02:59.567876 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-22 20:02:59.567886 | orchestrator | Sunday 22 June 2025 20:01:02 +0000 (0:00:02.550) 0:01:10.898 *********** 2025-06-22 20:02:59.567895 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.567905 | orchestrator | 2025-06-22 20:02:59.567915 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-22 20:02:59.567925 | orchestrator | Sunday 22 June 2025 20:01:04 +0000 (0:00:02.485) 0:01:13.383 *********** 2025-06-22 20:02:59.567934 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.567944 | orchestrator | 2025-06-22 20:02:59.567953 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-22 20:02:59.567963 | orchestrator | Sunday 22 June 2025 20:01:04 +0000 (0:00:00.126) 0:01:13.509 *********** 2025-06-22 20:02:59.567973 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.567983 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.567992 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.568002 | orchestrator | 2025-06-22 20:02:59.568011 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-22 20:02:59.568021 | orchestrator | Sunday 22 June 2025 20:01:05 +0000 (0:00:00.507) 0:01:14.016 *********** 
2025-06-22 20:02:59.568031 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.568040 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-22 20:02:59.568050 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:59.568060 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:59.568069 | orchestrator | 2025-06-22 20:02:59.568079 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-22 20:02:59.568088 | orchestrator | skipping: no hosts matched 2025-06-22 20:02:59.568098 | orchestrator | 2025-06-22 20:02:59.568112 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 20:02:59.568122 | orchestrator | 2025-06-22 20:02:59.568150 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 20:02:59.568160 | orchestrator | Sunday 22 June 2025 20:01:05 +0000 (0:00:00.325) 0:01:14.342 *********** 2025-06-22 20:02:59.568170 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:59.568179 | orchestrator | 2025-06-22 20:02:59.568189 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 20:02:59.568199 | orchestrator | Sunday 22 June 2025 20:01:23 +0000 (0:00:18.060) 0:01:32.402 *********** 2025-06-22 20:02:59.568208 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:59.568218 | orchestrator | 2025-06-22 20:02:59.568228 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 20:02:59.568238 | orchestrator | Sunday 22 June 2025 20:01:44 +0000 (0:00:20.691) 0:01:53.093 *********** 2025-06-22 20:02:59.568247 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:59.568257 | orchestrator | 2025-06-22 20:02:59.568267 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 20:02:59.568276 | orchestrator | 2025-06-22 20:02:59.568286 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 20:02:59.568296 | orchestrator | Sunday 22 June 2025 20:01:46 +0000 (0:00:02.505) 0:01:55.599 *********** 2025-06-22 20:02:59.568305 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:59.568315 | orchestrator | 2025-06-22 20:02:59.568325 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 20:02:59.568340 | orchestrator | Sunday 22 June 2025 20:02:11 +0000 (0:00:25.067) 0:02:20.666 *********** 2025-06-22 20:02:59.568350 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:59.568360 | orchestrator | 2025-06-22 20:02:59.568369 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 20:02:59.568379 | orchestrator | Sunday 22 June 2025 20:02:27 +0000 (0:00:15.552) 0:02:36.219 *********** 2025-06-22 20:02:59.568389 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:59.568399 | orchestrator | 2025-06-22 20:02:59.568408 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-22 20:02:59.568418 | orchestrator | 2025-06-22 20:02:59.568433 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 20:02:59.568443 | orchestrator | Sunday 22 June 2025 20:02:29 +0000 (0:00:02.485) 0:02:38.704 *********** 2025-06-22 20:02:59.568453 | orchestrator | changed: [testbed-node-0] 
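The ignored failures earlier in this play ("Timeout when waiting for search string MariaDB in 192.168.16.x:3306") and the "Wait for ... MariaDB service port liveness" handlers that follow them are the same kind of probe: read the server greeting on port 3306 and look for the string "MariaDB". On a first deployment the probe is expected to time out, which is why the results are marked "...ignoring" and the play goes on to bootstrap the Galera cluster; after the bootstrap and restart handlers the probe succeeds. A minimal Python sketch of an equivalent probe follows; the helper name and timeout values are illustrative only, not the actual kolla-ansible task.

import socket
import time

def mariadb_port_alive(host: str, port: int = 3306, timeout: float = 10.0) -> bool:
    # Keep retrying until the deadline, mirroring the "wait for search string" behaviour.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2) as sock:
                sock.settimeout(2)
                # The MySQL/MariaDB handshake packet carries the server version string.
                greeting = sock.recv(1024)
                if b"MariaDB" in greeting:
                    return True
        except OSError:
            pass  # port not open yet; retry until the deadline expires
        time.sleep(1)
    # Corresponds to the logged "Timeout when waiting for search string MariaDB" failure.
    return False

Example: mariadb_port_alive("192.168.16.10") would return False before the mariadb containers exist and True once the bootstrap/restart handlers above have brought Galera up on that node.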
2025-06-22 20:02:59.568463 | orchestrator | 2025-06-22 20:02:59.568472 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 20:02:59.568482 | orchestrator | Sunday 22 June 2025 20:02:40 +0000 (0:00:10.316) 0:02:49.021 *********** 2025-06-22 20:02:59.568492 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.568502 | orchestrator | 2025-06-22 20:02:59.568511 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 20:02:59.568521 | orchestrator | Sunday 22 June 2025 20:02:44 +0000 (0:00:04.508) 0:02:53.529 *********** 2025-06-22 20:02:59.568531 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.568540 | orchestrator | 2025-06-22 20:02:59.568550 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-22 20:02:59.568560 | orchestrator | 2025-06-22 20:02:59.568569 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-22 20:02:59.568579 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:02.413) 0:02:55.943 *********** 2025-06-22 20:02:59.568589 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:59.568599 | orchestrator | 2025-06-22 20:02:59.568609 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-22 20:02:59.568618 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:00.511) 0:02:56.454 *********** 2025-06-22 20:02:59.568628 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.568638 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.568647 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.568657 | orchestrator | 2025-06-22 20:02:59.568667 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-22 20:02:59.568676 | orchestrator | Sunday 22 June 2025 20:02:49 +0000 (0:00:02.078) 0:02:58.533 *********** 2025-06-22 20:02:59.568686 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.568695 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.568705 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.568715 | orchestrator | 2025-06-22 20:02:59.568724 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-22 20:02:59.568734 | orchestrator | Sunday 22 June 2025 20:02:51 +0000 (0:00:01.830) 0:03:00.363 *********** 2025-06-22 20:02:59.568744 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.568753 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.568763 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.568773 | orchestrator | 2025-06-22 20:02:59.568782 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-22 20:02:59.568792 | orchestrator | Sunday 22 June 2025 20:02:53 +0000 (0:00:02.127) 0:03:02.491 *********** 2025-06-22 20:02:59.568802 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.568812 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.568821 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:59.568831 | orchestrator | 2025-06-22 20:02:59.568840 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-22 20:02:59.568850 | orchestrator | Sunday 22 June 2025 20:02:55 +0000 (0:00:02.083) 0:03:04.574 *********** 
2025-06-22 20:02:59.568865 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:59.568874 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:59.568884 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:59.568894 | orchestrator | 2025-06-22 20:02:59.568903 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-22 20:02:59.568913 | orchestrator | Sunday 22 June 2025 20:02:58 +0000 (0:00:03.019) 0:03:07.594 *********** 2025-06-22 20:02:59.568923 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:59.568933 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:59.568942 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:59.568952 | orchestrator | 2025-06-22 20:02:59.568962 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:02:59.568979 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-22 20:02:59.568990 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-22 20:02:59.569001 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-22 20:02:59.569011 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-22 20:02:59.569021 | orchestrator | 2025-06-22 20:02:59.569030 | orchestrator | 2025-06-22 20:02:59.569040 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:02:59.569050 | orchestrator | Sunday 22 June 2025 20:02:59 +0000 (0:00:00.221) 0:03:07.816 *********** 2025-06-22 20:02:59.569059 | orchestrator | =============================================================================== 2025-06-22 20:02:59.569069 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 43.13s 2025-06-22 20:02:59.569079 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.24s 2025-06-22 20:02:59.569088 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.07s 2025-06-22 20:02:59.569097 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.06s 2025-06-22 20:02:59.569107 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 10.32s 2025-06-22 20:02:59.569116 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.65s 2025-06-22 20:02:59.569143 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.99s 2025-06-22 20:02:59.569153 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.51s 2025-06-22 20:02:59.569163 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.15s 2025-06-22 20:02:59.569172 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.76s 2025-06-22 20:02:59.569182 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.38s 2025-06-22 20:02:59.569191 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.25s 2025-06-22 20:02:59.569201 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.17s 2025-06-22 20:02:59.569210 | orchestrator | mariadb : Wait for MariaDB service to be ready 
through VIP -------------- 3.02s 2025-06-22 20:02:59.569220 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.95s 2025-06-22 20:02:59.569229 | orchestrator | Check MariaDB service --------------------------------------------------- 2.73s 2025-06-22 20:02:59.569239 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.66s 2025-06-22 20:02:59.569248 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 2.55s 2025-06-22 20:02:59.569258 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.49s 2025-06-22 20:02:59.569267 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.41s 2025-06-22 20:02:59.569283 | orchestrator | 2025-06-22 20:02:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:02.622357 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:02.623661 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:02.624437 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:02.624461 | orchestrator | 2025-06-22 20:03:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:05.666626 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:05.667533 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:05.672035 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:05.672061 | orchestrator | 2025-06-22 20:03:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:08.711587 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:08.713636 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:08.714302 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:08.714538 | orchestrator | 2025-06-22 20:03:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:11.748559 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:11.749195 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:11.750281 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:11.750891 | orchestrator | 2025-06-22 20:03:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:14.771427 | orchestrator | 2025-06-22 20:03:14 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:14.771634 | orchestrator | 2025-06-22 20:03:14 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:14.772171 | orchestrator | 2025-06-22 20:03:14 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:14.772722 | orchestrator | 2025-06-22 20:03:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:17.811684 | orchestrator | 2025-06-22 
20:03:17 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:17.813511 | orchestrator | 2025-06-22 20:03:17 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:17.814626 | orchestrator | 2025-06-22 20:03:17 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:17.814661 | orchestrator | 2025-06-22 20:03:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:20.851397 | orchestrator | 2025-06-22 20:03:20 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:20.852908 | orchestrator | 2025-06-22 20:03:20 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:20.854370 | orchestrator | 2025-06-22 20:03:20 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:20.854493 | orchestrator | 2025-06-22 20:03:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:23.892216 | orchestrator | 2025-06-22 20:03:23 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:23.892309 | orchestrator | 2025-06-22 20:03:23 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:23.893336 | orchestrator | 2025-06-22 20:03:23 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:23.893373 | orchestrator | 2025-06-22 20:03:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:26.922716 | orchestrator | 2025-06-22 20:03:26 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:26.924326 | orchestrator | 2025-06-22 20:03:26 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:26.925979 | orchestrator | 2025-06-22 20:03:26 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:26.926008 | orchestrator | 2025-06-22 20:03:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:29.972327 | orchestrator | 2025-06-22 20:03:29 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:29.974260 | orchestrator | 2025-06-22 20:03:29 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:29.974950 | orchestrator | 2025-06-22 20:03:29 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:29.974977 | orchestrator | 2025-06-22 20:03:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:33.014382 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:33.015039 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:33.017173 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:33.017218 | orchestrator | 2025-06-22 20:03:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:36.062747 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:36.065033 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:36.066693 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:36.066957 | orchestrator | 2025-06-22 20:03:36 | INFO  | Wait 1 
second(s) until the next check 2025-06-22 20:03:39.107843 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:39.107931 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:39.108260 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:39.108279 | orchestrator | 2025-06-22 20:03:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:42.151749 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:42.153522 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:42.156103 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:42.156181 | orchestrator | 2025-06-22 20:03:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:45.194879 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:45.196174 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:45.197195 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:45.197231 | orchestrator | 2025-06-22 20:03:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:48.233504 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:48.234217 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:48.235221 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:48.235263 | orchestrator | 2025-06-22 20:03:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:51.262729 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:51.263115 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:51.264464 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:51.264778 | orchestrator | 2025-06-22 20:03:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:54.300202 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:54.301611 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:54.303543 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:03:54.303623 | orchestrator | 2025-06-22 20:03:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:57.352738 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:03:57.354489 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:03:57.357344 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state 
STARTED 2025-06-22 20:03:57.357375 | orchestrator | 2025-06-22 20:03:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:00.394843 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:00.395398 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:00.396683 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:00.396712 | orchestrator | 2025-06-22 20:04:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:03.440179 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:03.441842 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:03.444056 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:03.444169 | orchestrator | 2025-06-22 20:04:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:06.493591 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:06.494674 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:06.495496 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:06.495643 | orchestrator | 2025-06-22 20:04:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:09.537054 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:09.538489 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:09.540651 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:09.540984 | orchestrator | 2025-06-22 20:04:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:12.581852 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:12.583105 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:12.584916 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:12.584941 | orchestrator | 2025-06-22 20:04:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:15.628230 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:15.629381 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:15.630814 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:15.630848 | orchestrator | 2025-06-22 20:04:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:18.671049 | orchestrator | 2025-06-22 20:04:18 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:18.672703 | orchestrator | 2025-06-22 20:04:18 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:18.673997 | orchestrator 
| 2025-06-22 20:04:18 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:18.674126 | orchestrator | 2025-06-22 20:04:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:21.710290 | orchestrator | 2025-06-22 20:04:21 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:21.711843 | orchestrator | 2025-06-22 20:04:21 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:21.713243 | orchestrator | 2025-06-22 20:04:21 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:21.713289 | orchestrator | 2025-06-22 20:04:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:24.755240 | orchestrator | 2025-06-22 20:04:24 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:24.756427 | orchestrator | 2025-06-22 20:04:24 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:24.758528 | orchestrator | 2025-06-22 20:04:24 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:24.758590 | orchestrator | 2025-06-22 20:04:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:27.805386 | orchestrator | 2025-06-22 20:04:27 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:27.807093 | orchestrator | 2025-06-22 20:04:27 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:27.809373 | orchestrator | 2025-06-22 20:04:27 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:27.809426 | orchestrator | 2025-06-22 20:04:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:30.852041 | orchestrator | 2025-06-22 20:04:30 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:30.852789 | orchestrator | 2025-06-22 20:04:30 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:30.854701 | orchestrator | 2025-06-22 20:04:30 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:30.854747 | orchestrator | 2025-06-22 20:04:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:33.901635 | orchestrator | 2025-06-22 20:04:33 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:33.902936 | orchestrator | 2025-06-22 20:04:33 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state STARTED 2025-06-22 20:04:33.906009 | orchestrator | 2025-06-22 20:04:33 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:33.906622 | orchestrator | 2025-06-22 20:04:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:36.961265 | orchestrator | 2025-06-22 20:04:36 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:36.964481 | orchestrator | 2025-06-22 20:04:36 | INFO  | Task 9c617488-ba5b-4df5-8c5d-d3b1ef159a4f is in state SUCCESS 2025-06-22 20:04:36.965911 | orchestrator | 2025-06-22 20:04:36.965945 | orchestrator | 2025-06-22 20:04:36.965963 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-22 20:04:36.965974 | orchestrator | 2025-06-22 20:04:36.965983 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-22 20:04:36.965991 | orchestrator | Sunday 22 June 2025 20:02:27 +0000 (0:00:00.443) 
0:00:00.443 *********** 2025-06-22 20:04:36.966000 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:04:36.966009 | orchestrator | 2025-06-22 20:04:36.966047 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-22 20:04:36.966058 | orchestrator | Sunday 22 June 2025 20:02:27 +0000 (0:00:00.566) 0:00:01.009 *********** 2025-06-22 20:04:36.966066 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.966075 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.966084 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.966092 | orchestrator | 2025-06-22 20:04:36.966100 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-22 20:04:36.966108 | orchestrator | Sunday 22 June 2025 20:02:28 +0000 (0:00:00.563) 0:00:01.573 *********** 2025-06-22 20:04:36.966116 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.966137 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.966145 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.966174 | orchestrator | 2025-06-22 20:04:36.966183 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-22 20:04:36.966191 | orchestrator | Sunday 22 June 2025 20:02:28 +0000 (0:00:00.265) 0:00:01.838 *********** 2025-06-22 20:04:36.966199 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.966207 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.966215 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.966222 | orchestrator | 2025-06-22 20:04:36.966230 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-22 20:04:36.966238 | orchestrator | Sunday 22 June 2025 20:02:29 +0000 (0:00:00.783) 0:00:02.621 *********** 2025-06-22 20:04:36.966246 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.966254 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.966262 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.966290 | orchestrator | 2025-06-22 20:04:36.966299 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-22 20:04:36.966307 | orchestrator | Sunday 22 June 2025 20:02:29 +0000 (0:00:00.273) 0:00:02.895 *********** 2025-06-22 20:04:36.966315 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.966323 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.966330 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.966338 | orchestrator | 2025-06-22 20:04:36.966346 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-22 20:04:36.966354 | orchestrator | Sunday 22 June 2025 20:02:30 +0000 (0:00:00.263) 0:00:03.159 *********** 2025-06-22 20:04:36.966362 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.966370 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.966377 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.966385 | orchestrator | 2025-06-22 20:04:36.966394 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-22 20:04:36.966402 | orchestrator | Sunday 22 June 2025 20:02:30 +0000 (0:00:00.286) 0:00:03.445 *********** 2025-06-22 20:04:36.966410 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.966418 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.966426 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 20:04:36.966434 | orchestrator | 2025-06-22 20:04:36.966442 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-22 20:04:36.966449 | orchestrator | Sunday 22 June 2025 20:02:30 +0000 (0:00:00.490) 0:00:03.935 *********** 2025-06-22 20:04:36.966457 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.966465 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.966473 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.966481 | orchestrator | 2025-06-22 20:04:36.966488 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-22 20:04:36.966497 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.299) 0:00:04.235 *********** 2025-06-22 20:04:36.966504 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:04:36.966512 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:04:36.966520 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:04:36.966528 | orchestrator | 2025-06-22 20:04:36.966536 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-22 20:04:36.966544 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.595) 0:00:04.831 *********** 2025-06-22 20:04:36.966552 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.966559 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.966668 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.966678 | orchestrator | 2025-06-22 20:04:36.966734 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-22 20:04:36.966745 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.398) 0:00:05.230 *********** 2025-06-22 20:04:36.966753 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:04:36.966761 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:04:36.967044 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:04:36.967056 | orchestrator | 2025-06-22 20:04:36.967064 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-22 20:04:36.967072 | orchestrator | Sunday 22 June 2025 20:02:34 +0000 (0:00:02.050) 0:00:07.280 *********** 2025-06-22 20:04:36.967080 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:04:36.967088 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:04:36.967096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:04:36.967104 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.967112 | orchestrator | 2025-06-22 20:04:36.967160 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-22 20:04:36.967191 | orchestrator | Sunday 22 June 2025 20:02:34 +0000 (0:00:00.388) 0:00:07.669 *********** 2025-06-22 20:04:36.967201 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.967212 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.967220 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.967229 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.967237 | orchestrator | 2025-06-22 20:04:36.967416 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-22 20:04:36.967426 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.692) 0:00:08.361 *********** 2025-06-22 20:04:36.967468 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.967480 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.967488 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.967497 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.967505 | orchestrator | 2025-06-22 20:04:36.967513 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-22 20:04:36.967521 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.172) 0:00:08.534 *********** 2025-06-22 20:04:36.967531 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'd9bf42eafe53', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-22 20:02:32.717365', 'end': '2025-06-22 20:02:32.757683', 'delta': '0:00:00.040318', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['d9bf42eafe53'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-22 20:04:36.967545 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '193df39412e1', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-1'], 'start': '2025-06-22 20:02:33.419665', 'end': '2025-06-22 20:02:33.464969', 'delta': '0:00:00.045304', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['193df39412e1'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-22 20:04:36.967586 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '12f76fe8576d', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-22 20:02:33.934371', 'end': '2025-06-22 20:02:33.994465', 'delta': '0:00:00.060094', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['12f76fe8576d'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-22 20:04:36.967596 | orchestrator | 2025-06-22 20:04:36.967604 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-22 20:04:36.967612 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.312) 0:00:08.846 *********** 2025-06-22 20:04:36.967620 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.967628 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.967636 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.967644 | orchestrator | 2025-06-22 20:04:36.967652 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-22 20:04:36.967660 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:00.380) 0:00:09.227 *********** 2025-06-22 20:04:36.967668 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-22 20:04:36.967676 | orchestrator | 2025-06-22 20:04:36.967684 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-22 20:04:36.967691 | orchestrator | Sunday 22 June 2025 20:02:37 +0000 (0:00:01.715) 0:00:10.943 *********** 2025-06-22 20:04:36.967699 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.967707 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.967715 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.967723 | orchestrator | 2025-06-22 20:04:36.967731 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-22 20:04:36.967739 | orchestrator | Sunday 22 June 2025 20:02:38 +0000 (0:00:00.229) 0:00:11.172 *********** 2025-06-22 20:04:36.967746 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.967754 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.967762 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.967770 | orchestrator | 2025-06-22 20:04:36.967778 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:04:36.967786 | orchestrator | Sunday 22 June 2025 20:02:38 +0000 (0:00:00.388) 0:00:11.561 *********** 
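In the fact-gathering above, a running mon container is located with docker ps -q --filter name=ceph-mon-<hostname>, and the existing cluster fsid is then read through that container instead of generating a new one. A rough Python sketch of the same lookup, assuming only the docker CLI and that the standard ceph fsid command is available inside the mon container, might be:

    import subprocess

    def find_mon_container(hostname: str) -> str | None:
        # Same filter as shown in the log: docker ps -q --filter name=ceph-mon-<hostname>
        result = subprocess.run(
            ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
            capture_output=True, text=True, check=False,
        )
        ids = result.stdout.strip().splitlines()
        return ids[0] if ids else None

    def get_cluster_fsid(container_id: str, cluster: str = "ceph") -> str:
        # Ask the monitor for the cluster fsid; assumes `ceph fsid` works in the container.
        result = subprocess.run(
            ["docker", "exec", container_id, "ceph", "--cluster", cluster, "fsid"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    # Example: reuse the fsid from the first monitor that is up.
    for node in ("testbed-node-0", "testbed-node-1", "testbed-node-2"):
        cid = find_mon_container(node)
        if cid:
            print(node, get_cluster_fsid(cid))
            break
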
2025-06-22 20:04:36.967793 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.967801 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.967809 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.967817 | orchestrator | 2025-06-22 20:04:36.967825 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-22 20:04:36.967833 | orchestrator | Sunday 22 June 2025 20:02:38 +0000 (0:00:00.357) 0:00:11.919 *********** 2025-06-22 20:04:36.967841 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.967849 | orchestrator | 2025-06-22 20:04:36.967857 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-22 20:04:36.967865 | orchestrator | Sunday 22 June 2025 20:02:38 +0000 (0:00:00.122) 0:00:12.041 *********** 2025-06-22 20:04:36.967873 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.967881 | orchestrator | 2025-06-22 20:04:36.967889 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:04:36.967901 | orchestrator | Sunday 22 June 2025 20:02:39 +0000 (0:00:00.207) 0:00:12.249 *********** 2025-06-22 20:04:36.967909 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.967917 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.967925 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.967933 | orchestrator | 2025-06-22 20:04:36.967941 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-22 20:04:36.967949 | orchestrator | Sunday 22 June 2025 20:02:39 +0000 (0:00:00.255) 0:00:12.504 *********** 2025-06-22 20:04:36.967957 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.968011 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.968020 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.968027 | orchestrator | 2025-06-22 20:04:36.968035 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-22 20:04:36.968044 | orchestrator | Sunday 22 June 2025 20:02:39 +0000 (0:00:00.279) 0:00:12.784 *********** 2025-06-22 20:04:36.968052 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.968059 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.968067 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.968075 | orchestrator | 2025-06-22 20:04:36.968083 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-22 20:04:36.968091 | orchestrator | Sunday 22 June 2025 20:02:40 +0000 (0:00:00.388) 0:00:13.173 *********** 2025-06-22 20:04:36.968099 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.968107 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.968115 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.968147 | orchestrator | 2025-06-22 20:04:36.968156 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-22 20:04:36.968168 | orchestrator | Sunday 22 June 2025 20:02:40 +0000 (0:00:00.279) 0:00:13.452 *********** 2025-06-22 20:04:36.968176 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.968184 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.968192 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.968200 | orchestrator | 2025-06-22 20:04:36.968208 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device 
link(s)] *********************** 2025-06-22 20:04:36.968216 | orchestrator | Sunday 22 June 2025 20:02:40 +0000 (0:00:00.273) 0:00:13.726 *********** 2025-06-22 20:04:36.968224 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.968232 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.968240 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.968248 | orchestrator | 2025-06-22 20:04:36.968256 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-22 20:04:36.968289 | orchestrator | Sunday 22 June 2025 20:02:40 +0000 (0:00:00.286) 0:00:14.012 *********** 2025-06-22 20:04:36.968298 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.968306 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.968314 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.968322 | orchestrator | 2025-06-22 20:04:36.968330 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-22 20:04:36.968337 | orchestrator | Sunday 22 June 2025 20:02:41 +0000 (0:00:00.419) 0:00:14.431 *********** 2025-06-22 20:04:36.968346 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988500a7--3c26--5f89--b599--1c63900dc902-osd--block--988500a7--3c26--5f89--b599--1c63900dc902', 'dm-uuid-LVM-ZZ2TtSjbnMojwAj3mtQDARFQeMNsdJxjTzVJdkK9yPtVvNy9jvy7424QwNw0aPi5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1623286--8630--50a6--960f--aa7fe8c22ac9-osd--block--f1623286--8630--50a6--960f--aa7fe8c22ac9', 'dm-uuid-LVM-pvws3no6YnWFb5jLLz15f4x3pF8jIM7ceJ2LSdtQSj3b4EnkSIQUHqz557SL12cs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968379 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968407 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968447 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968455 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--809c9636--3d83--5d3b--8a98--356a4387ae79-osd--block--809c9636--3d83--5d3b--8a98--356a4387ae79', 'dm-uuid-LVM-HNvb7P5UdpqK3mwinBCUfOavIhvtapZSo26fe1u9TpjVyD7pwEfLe1urhCckwvSQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--988500a7--3c26--5f89--b599--1c63900dc902-osd--block--988500a7--3c26--5f89--b599--1c63900dc902'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aHnAQW-CY2S-QPcG-urub-1Uq7-6Nhu-1ETFxy', 'scsi-0QEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade', 'scsi-SQEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e-osd--block--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e', 'dm-uuid-LVM-cs6yb6i6fq5eydSrrPQKaabKKtL0P8Uw5rl2V5e79OcpIQ83rtE6ZFduJNqoKS8E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--f1623286--8630--50a6--960f--aa7fe8c22ac9-osd--block--f1623286--8630--50a6--960f--aa7fe8c22ac9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FsWRfU-wdE9-1nsq-H9R0-fLa0-WHjk-2DzLs0', 'scsi-0QEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d', 'scsi-SQEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968557 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc', 'scsi-SQEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968616 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968624 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.968637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968646 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968665 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--809c9636--3d83--5d3b--8a98--356a4387ae79-osd--block--809c9636--3d83--5d3b--8a98--356a4387ae79'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y5sRIk-LsA5-itBW-li2o-MNUJ-ffYN-BNgrYU', 'scsi-0QEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a', 'scsi-SQEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e-osd--block--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-S3aXuW-Blj8-bafQ-BtEV-2zgi-BrrR-g7gMaT', 'scsi-0QEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b', 'scsi-SQEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1', 'scsi-SQEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968744 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.968756 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b2f14396--315c--50f9--a6a7--8817318b41c3-osd--block--b2f14396--315c--50f9--a6a7--8817318b41c3', 'dm-uuid-LVM-qAyMel2csi5wA2oHi0eSYgu6TRAIHRBr0CRB2s2crF0E3DsICXFrbq2cprESQylt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968770 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--60bbbdec--af53--55ad--b293--31f676104815-osd--block--60bbbdec--af53--55ad--b293--31f676104815', 'dm-uuid-LVM-a3vy1AKtadDZuu1qWxIr3lZp7NOsXyj8EpgcK9hcB9JuQNvZo9XvtcRp6hzkTg97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968791 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968808 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968816 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968824 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968840 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:04:36.968859 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b2f14396--315c--50f9--a6a7--8817318b41c3-osd--block--b2f14396--315c--50f9--a6a7--8817318b41c3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YrvepJ-gtxc-rI1A-6L49-iRD4-STYx-7gD10V', 'scsi-0QEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2', 'scsi-SQEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--60bbbdec--af53--55ad--b293--31f676104815-osd--block--60bbbdec--af53--55ad--b293--31f676104815'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YeYW6l-qb1k-qFKK-wbuM-lxrP-uce0-4eVM95', 'scsi-0QEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae', 'scsi-SQEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968894 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2', 'scsi-SQEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:04:36.968920 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.968929 | orchestrator | 2025-06-22 20:04:36.968937 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-22 20:04:36.968945 | orchestrator | Sunday 22 June 2025 20:02:41 +0000 (0:00:00.556) 0:00:14.987 *********** 2025-06-22 20:04:36.968954 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--988500a7--3c26--5f89--b599--1c63900dc902-osd--block--988500a7--3c26--5f89--b599--1c63900dc902', 'dm-uuid-LVM-ZZ2TtSjbnMojwAj3mtQDARFQeMNsdJxjTzVJdkK9yPtVvNy9jvy7424QwNw0aPi5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.968962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f1623286--8630--50a6--960f--aa7fe8c22ac9-osd--block--f1623286--8630--50a6--960f--aa7fe8c22ac9', 'dm-uuid-LVM-pvws3no6YnWFb5jLLz15f4x3pF8jIM7ceJ2LSdtQSj3b4EnkSIQUHqz557SL12cs'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.968971 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.968979 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.968990 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969009 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969017 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969026 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969034 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969043 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--809c9636--3d83--5d3b--8a98--356a4387ae79-osd--block--809c9636--3d83--5d3b--8a98--356a4387ae79', 'dm-uuid-LVM-HNvb7P5UdpqK3mwinBCUfOavIhvtapZSo26fe1u9TpjVyD7pwEfLe1urhCckwvSQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969051 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969072 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e-osd--block--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e', 'dm-uuid-LVM-cs6yb6i6fq5eydSrrPQKaabKKtL0P8Uw5rl2V5e79OcpIQ83rtE6ZFduJNqoKS8E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969082 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part1', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part14', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part15', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part16', 'scsi-SQEMU_QEMU_HARDDISK_e5fdbcb2-2928-4111-aea0-2f8879b135c3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969091 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969103 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--988500a7--3c26--5f89--b599--1c63900dc902-osd--block--988500a7--3c26--5f89--b599--1c63900dc902'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-aHnAQW-CY2S-QPcG-urub-1Uq7-6Nhu-1ETFxy', 'scsi-0QEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade', 'scsi-SQEMU_QEMU_HARDDISK_d397d31c-b886-4607-b3cb-2d758622dade'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969133 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969143 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--f1623286--8630--50a6--960f--aa7fe8c22ac9-osd--block--f1623286--8630--50a6--960f--aa7fe8c22ac9'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-FsWRfU-wdE9-1nsq-H9R0-fLa0-WHjk-2DzLs0', 'scsi-0QEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d', 'scsi-SQEMU_QEMU_HARDDISK_66d4c0b6-de40-44d2-a991-376660387b3d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969151 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc', 'scsi-SQEMU_QEMU_HARDDISK_f438dec5-52e6-4e07-b468-2b34fd5e0bbc'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969242 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969270 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-53-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969279 | orchestrator | skipping: [testbed-node-3] 
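The long runs of "skipping" entries above and below belong to the ceph-facts task "Set_fact devices generate device list when osd_auto_discovery": every block device reported in the ansible_devices facts is skipped because the conditional osd_auto_discovery | default(False) | bool evaluates to false, so the OSD devices list stays as configured for the testbed instead of being derived from these facts. As a minimal, hypothetical Python sketch (not ceph-ansible's actual implementation, and the helper name candidate_osd_disks is made up for illustration), an auto-discovery filter over facts like the ones echoed here would look roughly like this:

    def candidate_osd_disks(ansible_devices: dict) -> list[str]:
        """Pick whole, unused disks out of an ansible_devices fact dictionary."""
        candidates = []
        for name, info in ansible_devices.items():
            if name.startswith(("loop", "dm-", "sr")):
                continue  # ignore loopback, device-mapper and optical devices
            if info["removable"] != "0":
                continue  # ignore removable media
            if info["partitions"]:
                continue  # ignore disks that already carry partitions (e.g. sda, the root disk)
            if info["holders"]:
                continue  # ignore disks already claimed, e.g. sdb/sdc backing ceph OSD LVs
            candidates.append("/dev/" + name)
        return sorted(candidates)

Applied to the device facts shown in the skipped items, a filter like this would keep only the still empty /dev/sdd on each node, since sda carries the root partitions, sdb and sdc are already held by ceph LVM volumes, and the loop, dm and sr devices are not usable OSD disks.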
2025-06-22 20:04:36.969287 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969295 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969304 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969312 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969330 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part1', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part14', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part15', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part16', 'scsi-SQEMU_QEMU_HARDDISK_0f50b971-8498-45b8-bf00-ff2a09b130da-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969346 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--809c9636--3d83--5d3b--8a98--356a4387ae79-osd--block--809c9636--3d83--5d3b--8a98--356a4387ae79'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Y5sRIk-LsA5-itBW-li2o-MNUJ-ffYN-BNgrYU', 'scsi-0QEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a', 'scsi-SQEMU_QEMU_HARDDISK_9d381e45-09fd-4a20-ab1c-6f33bb7ad47a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969355 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e-osd--block--0f31c53c--bcdf--5bd2--bfc5--d0de6e74979e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-S3aXuW-Blj8-bafQ-BtEV-2zgi-BrrR-g7gMaT', 'scsi-0QEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b', 'scsi-SQEMU_QEMU_HARDDISK_ca04149b-3774-4fe5-a4a8-e7007e740a3b'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969366 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b2f14396--315c--50f9--a6a7--8817318b41c3-osd--block--b2f14396--315c--50f9--a6a7--8817318b41c3', 'dm-uuid-LVM-qAyMel2csi5wA2oHi0eSYgu6TRAIHRBr0CRB2s2crF0E3DsICXFrbq2cprESQylt'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969384 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1', 'scsi-SQEMU_QEMU_HARDDISK_10758f5a-a518-4894-b68c-79c541e050d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969393 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--60bbbdec--af53--55ad--b293--31f676104815-osd--block--60bbbdec--af53--55ad--b293--31f676104815', 'dm-uuid-LVM-a3vy1AKtadDZuu1qWxIr3lZp7NOsXyj8EpgcK9hcB9JuQNvZo9XvtcRp6hzkTg97'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969402 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-06-22-19-10-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969410 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969418 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.969427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969443 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969455 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969472 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969481 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969489 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_3ad08e50-43a2-44f4-b591-5a498ff5d4c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969520 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--b2f14396--315c--50f9--a6a7--8817318b41c3-osd--block--b2f14396--315c--50f9--a6a7--8817318b41c3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YrvepJ-gtxc-rI1A-6L49-iRD4-STYx-7gD10V', 'scsi-0QEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2', 'scsi-SQEMU_QEMU_HARDDISK_986f77d9-7eeb-491e-bdbe-4c9e8ad066d2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969529 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--60bbbdec--af53--55ad--b293--31f676104815-osd--block--60bbbdec--af53--55ad--b293--31f676104815'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-YeYW6l-qb1k-qFKK-wbuM-lxrP-uce0-4eVM95', 'scsi-0QEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae', 'scsi-SQEMU_QEMU_HARDDISK_f12434e6-788f-4ffb-a434-d641146d84ae'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2', 'scsi-SQEMU_QEMU_HARDDISK_b3712533-4ba6-4a13-8d22-1afd9c8ce6f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969558 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:04:36.969566 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.969574 | orchestrator | 2025-06-22 20:04:36.969583 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-22 20:04:36.969591 | orchestrator | Sunday 22 June 2025 20:02:42 +0000 (0:00:00.582) 0:00:15.570 *********** 2025-06-22 20:04:36.969599 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.969607 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.969615 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.969623 | orchestrator | 2025-06-22 20:04:36.969631 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-22 20:04:36.969639 | orchestrator | Sunday 22 June 2025 20:02:43 +0000 (0:00:00.653) 0:00:16.223 *********** 2025-06-22 20:04:36.969647 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.969654 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.969662 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.969670 | orchestrator | 2025-06-22 20:04:36.969678 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:04:36.969686 | orchestrator | Sunday 22 June 2025 20:02:43 +0000 (0:00:00.445) 0:00:16.668 *********** 2025-06-22 20:04:36.969694 | 
orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.969702 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.969710 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.969718 | orchestrator | 2025-06-22 20:04:36.969726 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:04:36.969734 | orchestrator | Sunday 22 June 2025 20:02:44 +0000 (0:00:00.649) 0:00:17.318 *********** 2025-06-22 20:04:36.969742 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.969750 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.969758 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.969765 | orchestrator | 2025-06-22 20:04:36.969773 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:04:36.969781 | orchestrator | Sunday 22 June 2025 20:02:44 +0000 (0:00:00.288) 0:00:17.607 *********** 2025-06-22 20:04:36.969794 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.969802 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.969809 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.969817 | orchestrator | 2025-06-22 20:04:36.969825 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:04:36.969833 | orchestrator | Sunday 22 June 2025 20:02:44 +0000 (0:00:00.428) 0:00:18.035 *********** 2025-06-22 20:04:36.969841 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.969849 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.969857 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.969865 | orchestrator | 2025-06-22 20:04:36.969873 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-22 20:04:36.969883 | orchestrator | Sunday 22 June 2025 20:02:45 +0000 (0:00:00.549) 0:00:18.584 *********** 2025-06-22 20:04:36.969892 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-22 20:04:36.969901 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-22 20:04:36.969910 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-22 20:04:36.969919 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-22 20:04:36.969928 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-22 20:04:36.969937 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-22 20:04:36.969946 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-22 20:04:36.969955 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-22 20:04:36.969965 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-22 20:04:36.969973 | orchestrator | 2025-06-22 20:04:36.969982 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-22 20:04:36.969991 | orchestrator | Sunday 22 June 2025 20:02:46 +0000 (0:00:00.840) 0:00:19.425 *********** 2025-06-22 20:04:36.970000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:04:36.970009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:04:36.970055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:04:36.970065 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.970074 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 20:04:36.970083 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 20:04:36.970092 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 20:04:36.970101 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.970110 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 20:04:36.970119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 20:04:36.970166 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 20:04:36.970176 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.970185 | orchestrator | 2025-06-22 20:04:36.970198 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-22 20:04:36.970208 | orchestrator | Sunday 22 June 2025 20:02:46 +0000 (0:00:00.341) 0:00:19.767 *********** 2025-06-22 20:04:36.970217 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:04:36.970226 | orchestrator | 2025-06-22 20:04:36.970236 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 20:04:36.970245 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:00.726) 0:00:20.494 *********** 2025-06-22 20:04:36.970253 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.970261 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.970269 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.970277 | orchestrator | 2025-06-22 20:04:36.970290 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 20:04:36.970299 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:00.309) 0:00:20.803 *********** 2025-06-22 20:04:36.970321 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.970330 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.970337 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.970345 | orchestrator | 2025-06-22 20:04:36.970353 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 20:04:36.970361 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:00.295) 0:00:21.099 *********** 2025-06-22 20:04:36.970369 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.970377 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.970385 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:04:36.970393 | orchestrator | 2025-06-22 20:04:36.970401 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 20:04:36.970409 | orchestrator | Sunday 22 June 2025 20:02:48 +0000 (0:00:00.309) 0:00:21.408 *********** 2025-06-22 20:04:36.970417 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.970425 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.970433 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.970441 | orchestrator | 2025-06-22 20:04:36.970448 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 20:04:36.970456 | orchestrator | Sunday 22 June 2025 20:02:48 +0000 (0:00:00.587) 0:00:21.996 *********** 2025-06-22 20:04:36.970464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:04:36.970472 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 
20:04:36.970480 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:04:36.970488 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.970496 | orchestrator | 2025-06-22 20:04:36.970504 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 20:04:36.970512 | orchestrator | Sunday 22 June 2025 20:02:49 +0000 (0:00:00.371) 0:00:22.368 *********** 2025-06-22 20:04:36.970520 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:04:36.970528 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:04:36.970535 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:04:36.970543 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.970551 | orchestrator | 2025-06-22 20:04:36.970559 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 20:04:36.970567 | orchestrator | Sunday 22 June 2025 20:02:49 +0000 (0:00:00.394) 0:00:22.762 *********** 2025-06-22 20:04:36.970575 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:04:36.970583 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:04:36.970591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:04:36.970599 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.970607 | orchestrator | 2025-06-22 20:04:36.970615 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 20:04:36.970623 | orchestrator | Sunday 22 June 2025 20:02:49 +0000 (0:00:00.362) 0:00:23.125 *********** 2025-06-22 20:04:36.970631 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:04:36.970639 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:04:36.970647 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:04:36.970655 | orchestrator | 2025-06-22 20:04:36.970663 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 20:04:36.970671 | orchestrator | Sunday 22 June 2025 20:02:50 +0000 (0:00:00.323) 0:00:23.448 *********** 2025-06-22 20:04:36.970678 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:04:36.970686 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 20:04:36.970694 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 20:04:36.970702 | orchestrator | 2025-06-22 20:04:36.970710 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-22 20:04:36.970718 | orchestrator | Sunday 22 June 2025 20:02:50 +0000 (0:00:00.495) 0:00:23.943 *********** 2025-06-22 20:04:36.970726 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:04:36.970740 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:04:36.970748 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:04:36.970756 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:04:36.970764 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:04:36.970772 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:04:36.970780 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => 
(item=testbed-manager) 2025-06-22 20:04:36.970787 | orchestrator | 2025-06-22 20:04:36.970795 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-22 20:04:36.970803 | orchestrator | Sunday 22 June 2025 20:02:51 +0000 (0:00:00.940) 0:00:24.884 *********** 2025-06-22 20:04:36.970817 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:04:36.970826 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:04:36.970834 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:04:36.970842 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:04:36.970850 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:04:36.970858 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:04:36.970866 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 20:04:36.970873 | orchestrator | 2025-06-22 20:04:36.970885 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-22 20:04:36.970894 | orchestrator | Sunday 22 June 2025 20:02:53 +0000 (0:00:01.882) 0:00:26.767 *********** 2025-06-22 20:04:36.970901 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:04:36.970909 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:04:36.970917 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-22 20:04:36.970925 | orchestrator | 2025-06-22 20:04:36.970933 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-22 20:04:36.970941 | orchestrator | Sunday 22 June 2025 20:02:54 +0000 (0:00:00.385) 0:00:27.152 *********** 2025-06-22 20:04:36.970950 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:04:36.970959 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:04:36.970967 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:04:36.970975 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:04:36.970984 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:04:36.970997 | orchestrator | 2025-06-22 20:04:36.971005 | orchestrator | TASK [generate keys] *********************************************************** 2025-06-22 20:04:36.971013 | orchestrator | Sunday 22 June 2025 20:03:38 +0000 (0:00:44.947) 0:01:12.099 *********** 2025-06-22 20:04:36.971021 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971029 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971037 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971044 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971052 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971060 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971068 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-22 20:04:36.971076 | orchestrator | 2025-06-22 20:04:36.971084 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-22 20:04:36.971092 | orchestrator | Sunday 22 June 2025 20:04:04 +0000 (0:00:25.550) 0:01:37.649 *********** 2025-06-22 20:04:36.971100 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971108 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971116 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971134 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971142 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971150 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971158 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:04:36.971166 | orchestrator | 2025-06-22 20:04:36.971173 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-22 20:04:36.971185 | orchestrator | Sunday 22 June 2025 20:04:16 +0000 (0:00:12.165) 0:01:49.815 *********** 2025-06-22 20:04:36.971193 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971201 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:04:36.971209 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:04:36.971217 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971225 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:04:36.971233 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:04:36.971245 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971253 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:04:36.971261 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:04:36.971269 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971277 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:04:36.971285 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:04:36.971293 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971301 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:04:36.971308 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:04:36.971316 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:04:36.971329 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:04:36.971337 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:04:36.971345 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-06-22 20:04:36.971353 | orchestrator | 2025-06-22 20:04:36.971361 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:04:36.971369 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-06-22 20:04:36.971377 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-22 20:04:36.971386 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-22 20:04:36.971394 | orchestrator | 2025-06-22 20:04:36.971401 | orchestrator | 2025-06-22 20:04:36.971409 | orchestrator | 2025-06-22 20:04:36.971417 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:04:36.971425 | orchestrator | Sunday 22 June 2025 20:04:33 +0000 (0:00:17.078) 0:02:06.893 *********** 2025-06-22 20:04:36.971433 | orchestrator | =============================================================================== 2025-06-22 20:04:36.971441 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.95s 2025-06-22 20:04:36.971449 | orchestrator | generate keys ---------------------------------------------------------- 25.55s 2025-06-22 20:04:36.971457 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.08s 2025-06-22 20:04:36.971465 | orchestrator | get keys from monitors ------------------------------------------------- 12.17s 2025-06-22 20:04:36.971473 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.05s 2025-06-22 20:04:36.971480 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.88s 2025-06-22 20:04:36.971488 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.72s 2025-06-22 20:04:36.971496 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.94s 2025-06-22 20:04:36.971504 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s 2025-06-22 20:04:36.971512 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s 2025-06-22 20:04:36.971519 | orchestrator | 
ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s 2025-06-22 20:04:36.971527 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.69s 2025-06-22 20:04:36.971535 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.65s 2025-06-22 20:04:36.971543 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.65s 2025-06-22 20:04:36.971551 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.60s 2025-06-22 20:04:36.971559 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.59s 2025-06-22 20:04:36.971566 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s 2025-06-22 20:04:36.971574 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.57s 2025-06-22 20:04:36.971582 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.56s 2025-06-22 20:04:36.971590 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s 2025-06-22 20:04:36.971602 | orchestrator | 2025-06-22 20:04:36 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED 2025-06-22 20:04:36.971611 | orchestrator | 2025-06-22 20:04:36 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:36.971619 | orchestrator | 2025-06-22 20:04:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:40.008048 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:40.008748 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED 2025-06-22 20:04:40.010594 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:40.010623 | orchestrator | 2025-06-22 20:04:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:43.051070 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:43.051845 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED 2025-06-22 20:04:43.053621 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:43.053645 | orchestrator | 2025-06-22 20:04:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:46.099274 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:46.100741 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED 2025-06-22 20:04:46.104091 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state STARTED 2025-06-22 20:04:46.104225 | orchestrator | 2025-06-22 20:04:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:49.160910 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:49.162145 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED 2025-06-22 20:04:49.165624 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task 16c1a5c5-73a1-451e-9cf0-eb23c19bf040 is in state SUCCESS 
2025-06-22 20:04:49.167440 | orchestrator | 2025-06-22 20:04:49.167470 | orchestrator | 2025-06-22 20:04:49.167481 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:04:49.167491 | orchestrator | 2025-06-22 20:04:49.167501 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:04:49.167510 | orchestrator | Sunday 22 June 2025 20:03:03 +0000 (0:00:00.258) 0:00:00.258 *********** 2025-06-22 20:04:49.167520 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.167531 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.167540 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.167549 | orchestrator | 2025-06-22 20:04:49.167558 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:04:49.167568 | orchestrator | Sunday 22 June 2025 20:03:03 +0000 (0:00:00.292) 0:00:00.550 *********** 2025-06-22 20:04:49.167577 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-22 20:04:49.167587 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-22 20:04:49.167596 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-22 20:04:49.167659 | orchestrator | 2025-06-22 20:04:49.167669 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-22 20:04:49.167678 | orchestrator | 2025-06-22 20:04:49.167687 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:04:49.167697 | orchestrator | Sunday 22 June 2025 20:03:04 +0000 (0:00:00.437) 0:00:00.987 *********** 2025-06-22 20:04:49.167729 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:04:49.167741 | orchestrator | 2025-06-22 20:04:49.167749 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-22 20:04:49.167758 | orchestrator | Sunday 22 June 2025 20:03:04 +0000 (0:00:00.486) 0:00:01.474 *********** 2025-06-22 20:04:49.167813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.167894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.167925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.167937 | orchestrator | 2025-06-22 20:04:49.167947 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-22 20:04:49.167956 | orchestrator | Sunday 22 June 2025 20:03:05 +0000 (0:00:01.103) 0:00:02.577 *********** 2025-06-22 20:04:49.167965 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.168297 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.168314 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.168324 | orchestrator | 2025-06-22 20:04:49.168333 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:04:49.168342 | orchestrator | Sunday 22 June 2025 20:03:06 +0000 (0:00:00.446) 0:00:03.024 *********** 2025-06-22 20:04:49.168351 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:04:49.168371 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:04:49.168380 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:04:49.168389 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-22 20:04:49.168398 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:04:49.168407 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:04:49.168415 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:04:49.168424 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:04:49.168433 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:04:49.168453 | orchestrator | skipping: [testbed-node-1] => 
(item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:04:49.168462 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:04:49.168470 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-22 20:04:49.168479 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:04:49.168487 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:04:49.168496 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:04:49.168505 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:04:49.168513 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:04:49.168522 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:04:49.168530 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:04:49.168539 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-22 20:04:49.168547 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:04:49.168556 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:04:49.168565 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:04:49.168573 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:04:49.168590 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-22 20:04:49.168600 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-22 20:04:49.168609 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-22 20:04:49.168618 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-22 20:04:49.168627 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-22 20:04:49.168635 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-22 20:04:49.168644 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-22 20:04:49.168653 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-22 20:04:49.168661 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-22 20:04:49.168671 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-22 20:04:49.168679 | orchestrator | 2025-06-22 20:04:49.168689 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.168697 | orchestrator | Sunday 22 June 2025 20:03:06 +0000 (0:00:00.742) 0:00:03.766 *********** 2025-06-22 20:04:49.168706 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.168715 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.168729 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.168738 | orchestrator | 2025-06-22 20:04:49.168747 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.168755 | orchestrator | Sunday 22 June 2025 20:03:07 +0000 (0:00:00.299) 0:00:04.065 *********** 2025-06-22 20:04:49.168764 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.168773 | orchestrator | 2025-06-22 20:04:49.168788 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.168797 | orchestrator | Sunday 22 June 2025 20:03:07 +0000 (0:00:00.132) 0:00:04.198 *********** 2025-06-22 20:04:49.168806 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.168814 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.168823 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.168832 | orchestrator | 2025-06-22 20:04:49.168841 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.168849 | orchestrator | Sunday 22 June 2025 20:03:07 +0000 (0:00:00.510) 0:00:04.709 *********** 2025-06-22 20:04:49.168858 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.168867 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.168876 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.168884 | orchestrator | 2025-06-22 20:04:49.168893 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.168902 | orchestrator | Sunday 22 June 2025 20:03:08 +0000 (0:00:00.290) 0:00:04.999 *********** 2025-06-22 20:04:49.168910 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.168919 | orchestrator | 2025-06-22 20:04:49.168928 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.168937 | orchestrator | Sunday 22 June 2025 20:03:08 +0000 (0:00:00.140) 0:00:05.140 *********** 2025-06-22 20:04:49.168945 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.168954 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.168963 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.168971 | orchestrator | 2025-06-22 20:04:49.168980 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.168989 | orchestrator | Sunday 22 June 2025 20:03:08 +0000 (0:00:00.288) 0:00:05.429 *********** 2025-06-22 20:04:49.168998 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.169007 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.169016 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.169024 | orchestrator | 2025-06-22 20:04:49.169033 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.169042 | orchestrator | Sunday 22 June 2025 20:03:08 +0000 
(0:00:00.297) 0:00:05.726 *********** 2025-06-22 20:04:49.169051 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169059 | orchestrator | 2025-06-22 20:04:49.169068 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.169077 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:00.342) 0:00:06.069 *********** 2025-06-22 20:04:49.169086 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169095 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.169122 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.169131 | orchestrator | 2025-06-22 20:04:49.169140 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.169148 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:00.289) 0:00:06.359 *********** 2025-06-22 20:04:49.169157 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.169166 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.169175 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.169183 | orchestrator | 2025-06-22 20:04:49.169192 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.169205 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:00.271) 0:00:06.630 *********** 2025-06-22 20:04:49.169214 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169223 | orchestrator | 2025-06-22 20:04:49.169231 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.169246 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:00.122) 0:00:06.753 *********** 2025-06-22 20:04:49.169255 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169263 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.169272 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.169281 | orchestrator | 2025-06-22 20:04:49.169289 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.169298 | orchestrator | Sunday 22 June 2025 20:03:10 +0000 (0:00:00.260) 0:00:07.013 *********** 2025-06-22 20:04:49.169307 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.169316 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.169324 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.169333 | orchestrator | 2025-06-22 20:04:49.169342 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.169351 | orchestrator | Sunday 22 June 2025 20:03:10 +0000 (0:00:00.403) 0:00:07.416 *********** 2025-06-22 20:04:49.169359 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169368 | orchestrator | 2025-06-22 20:04:49.169377 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.169386 | orchestrator | Sunday 22 June 2025 20:03:10 +0000 (0:00:00.103) 0:00:07.520 *********** 2025-06-22 20:04:49.169394 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169403 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.169412 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.169420 | orchestrator | 2025-06-22 20:04:49.169429 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.169438 | orchestrator | Sunday 22 June 2025 20:03:10 +0000 (0:00:00.244) 
0:00:07.765 *********** 2025-06-22 20:04:49.169446 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.169455 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.169464 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.169473 | orchestrator | 2025-06-22 20:04:49.169481 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.169490 | orchestrator | Sunday 22 June 2025 20:03:11 +0000 (0:00:00.260) 0:00:08.025 *********** 2025-06-22 20:04:49.169499 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169508 | orchestrator | 2025-06-22 20:04:49.169516 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.169525 | orchestrator | Sunday 22 June 2025 20:03:11 +0000 (0:00:00.110) 0:00:08.135 *********** 2025-06-22 20:04:49.169534 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169542 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.169551 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.169560 | orchestrator | 2025-06-22 20:04:49.169568 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.169582 | orchestrator | Sunday 22 June 2025 20:03:11 +0000 (0:00:00.385) 0:00:08.521 *********** 2025-06-22 20:04:49.169591 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.169600 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.169609 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.169617 | orchestrator | 2025-06-22 20:04:49.169626 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.169635 | orchestrator | Sunday 22 June 2025 20:03:11 +0000 (0:00:00.299) 0:00:08.821 *********** 2025-06-22 20:04:49.169643 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169652 | orchestrator | 2025-06-22 20:04:49.169661 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.169670 | orchestrator | Sunday 22 June 2025 20:03:11 +0000 (0:00:00.107) 0:00:08.929 *********** 2025-06-22 20:04:49.169678 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169687 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.169696 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.169704 | orchestrator | 2025-06-22 20:04:49.169713 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.169727 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.238) 0:00:09.168 *********** 2025-06-22 20:04:49.169736 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.169745 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.169754 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.169762 | orchestrator | 2025-06-22 20:04:49.169771 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.169780 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.266) 0:00:09.434 *********** 2025-06-22 20:04:49.169789 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169798 | orchestrator | 2025-06-22 20:04:49.169806 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.169815 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.121) 0:00:09.555 *********** 2025-06-22 
20:04:49.169824 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169832 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.169841 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.169850 | orchestrator | 2025-06-22 20:04:49.169858 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.169867 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.380) 0:00:09.935 *********** 2025-06-22 20:04:49.169876 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.169885 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.169893 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.169902 | orchestrator | 2025-06-22 20:04:49.169911 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.169919 | orchestrator | Sunday 22 June 2025 20:03:13 +0000 (0:00:00.361) 0:00:10.297 *********** 2025-06-22 20:04:49.169928 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169937 | orchestrator | 2025-06-22 20:04:49.169945 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.169954 | orchestrator | Sunday 22 June 2025 20:03:13 +0000 (0:00:00.123) 0:00:10.420 *********** 2025-06-22 20:04:49.169963 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.169971 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.169980 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.169989 | orchestrator | 2025-06-22 20:04:49.170002 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:49.170011 | orchestrator | Sunday 22 June 2025 20:03:13 +0000 (0:00:00.251) 0:00:10.671 *********** 2025-06-22 20:04:49.170063 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:49.170072 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:49.170081 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:49.170089 | orchestrator | 2025-06-22 20:04:49.170098 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:49.170142 | orchestrator | Sunday 22 June 2025 20:03:14 +0000 (0:00:00.583) 0:00:11.255 *********** 2025-06-22 20:04:49.170151 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.170160 | orchestrator | 2025-06-22 20:04:49.170168 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:49.170177 | orchestrator | Sunday 22 June 2025 20:03:14 +0000 (0:00:00.107) 0:00:11.363 *********** 2025-06-22 20:04:49.170186 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.170195 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.170203 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.170212 | orchestrator | 2025-06-22 20:04:49.170221 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-22 20:04:49.170229 | orchestrator | Sunday 22 June 2025 20:03:14 +0000 (0:00:00.254) 0:00:11.617 *********** 2025-06-22 20:04:49.170238 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:49.170247 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:04:49.170255 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:04:49.170264 | orchestrator | 2025-06-22 20:04:49.170273 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-22 
20:04:49.170281 | orchestrator | Sunday 22 June 2025 20:03:16 +0000 (0:00:01.482) 0:00:13.100 *********** 2025-06-22 20:04:49.170297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:04:49.170306 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:04:49.170315 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:04:49.170324 | orchestrator | 2025-06-22 20:04:49.170333 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-22 20:04:49.170341 | orchestrator | Sunday 22 June 2025 20:03:17 +0000 (0:00:01.660) 0:00:14.761 *********** 2025-06-22 20:04:49.170350 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:04:49.170360 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:04:49.170369 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:04:49.170377 | orchestrator | 2025-06-22 20:04:49.170386 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-22 20:04:49.170401 | orchestrator | Sunday 22 June 2025 20:03:20 +0000 (0:00:02.290) 0:00:17.051 *********** 2025-06-22 20:04:49.170410 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:04:49.170418 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:04:49.170427 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:04:49.170436 | orchestrator | 2025-06-22 20:04:49.170445 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-22 20:04:49.170453 | orchestrator | Sunday 22 June 2025 20:03:21 +0000 (0:00:01.540) 0:00:18.591 *********** 2025-06-22 20:04:49.170462 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.170471 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.170479 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.170488 | orchestrator | 2025-06-22 20:04:49.170497 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-22 20:04:49.170505 | orchestrator | Sunday 22 June 2025 20:03:21 +0000 (0:00:00.287) 0:00:18.879 *********** 2025-06-22 20:04:49.170514 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.170523 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.170531 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.170540 | orchestrator | 2025-06-22 20:04:49.170549 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:04:49.170557 | orchestrator | Sunday 22 June 2025 20:03:22 +0000 (0:00:00.243) 0:00:19.122 *********** 2025-06-22 20:04:49.170566 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:04:49.170575 | orchestrator | 2025-06-22 20:04:49.170584 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-22 20:04:49.170592 | orchestrator | 
Sunday 22 June 2025 20:03:22 +0000 (0:00:00.675) 0:00:19.798 *********** 2025-06-22 20:04:49.170610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.170635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.170657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.170672 | orchestrator | 2025-06-22 20:04:49.170682 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-22 20:04:49.170690 | orchestrator | Sunday 22 June 2025 20:03:24 +0000 (0:00:01.438) 0:00:21.237 *********** 2025-06-22 20:04:49.170707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:49.170722 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.170742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { 
path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:49.170752 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.170766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:49.170782 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.170791 | orchestrator | 2025-06-22 20:04:49.170799 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-22 20:04:49.170808 | orchestrator | Sunday 22 June 2025 20:03:24 +0000 (0:00:00.535) 0:00:21.772 *********** 2025-06-22 20:04:49.170824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:49.170833 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.170848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:49.170863 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.170879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:49.170888 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.170897 | orchestrator | 2025-06-22 20:04:49.170906 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-22 20:04:49.170914 | orchestrator | Sunday 22 June 2025 20:03:25 +0000 (0:00:01.109) 0:00:22.882 *********** 2025-06-22 20:04:49.170929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.170951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.170967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:49.170982 | orchestrator | 2025-06-22 20:04:49.170991 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:04:49.171000 | orchestrator | Sunday 22 June 2025 20:03:27 +0000 (0:00:01.206) 0:00:24.088 *********** 2025-06-22 20:04:49.171008 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:49.171017 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:49.171026 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:49.171034 | orchestrator | 2025-06-22 20:04:49.171043 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:04:49.171052 | orchestrator | Sunday 22 
June 2025 20:03:27 +0000 (0:00:00.261) 0:00:24.350 ***********
2025-06-22 20:04:49.171061 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 20:04:49.171069 | orchestrator |
2025-06-22 20:04:49.171078 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-06-22 20:04:49.171091 | orchestrator | Sunday 22 June 2025 20:03:27 +0000 (0:00:00.586) 0:00:24.936 ***********
2025-06-22 20:04:49.171100 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:04:49.171148 | orchestrator |
2025-06-22 20:04:49.171157 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-06-22 20:04:49.171166 | orchestrator | Sunday 22 June 2025 20:03:30 +0000 (0:00:02.302) 0:00:27.238 ***********
2025-06-22 20:04:49.171175 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:04:49.171184 | orchestrator |
2025-06-22 20:04:49.171193 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-06-22 20:04:49.171201 | orchestrator | Sunday 22 June 2025 20:03:32 +0000 (0:00:02.142) 0:00:29.380 ***********
2025-06-22 20:04:49.171210 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:04:49.171219 | orchestrator |
2025-06-22 20:04:49.171228 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 20:04:49.171236 | orchestrator | Sunday 22 June 2025 20:03:47 +0000 (0:00:15.513) 0:00:44.894 ***********
2025-06-22 20:04:49.171245 | orchestrator |
2025-06-22 20:04:49.171262 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 20:04:49.171271 | orchestrator | Sunday 22 June 2025 20:03:47 +0000 (0:00:00.059) 0:00:44.953 ***********
2025-06-22 20:04:49.171280 | orchestrator |
2025-06-22 20:04:49.171288 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 20:04:49.171297 | orchestrator | Sunday 22 June 2025 20:03:48 +0000 (0:00:00.060) 0:00:45.014 ***********
2025-06-22 20:04:49.171305 | orchestrator |
2025-06-22 20:04:49.171314 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-06-22 20:04:49.171323 | orchestrator | Sunday 22 June 2025 20:03:48 +0000 (0:00:00.061) 0:00:45.075 ***********
2025-06-22 20:04:49.171331 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:04:49.171340 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:04:49.171349 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:04:49.171358 | orchestrator |
2025-06-22 20:04:49.171366 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:04:49.171375 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-06-22 20:04:49.171384 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-22 20:04:49.171393 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-22 20:04:49.171402 | orchestrator |
2025-06-22 20:04:49.171411 | orchestrator |
2025-06-22 20:04:49.171419 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:04:49.171428 | orchestrator | Sunday 22 June 2025 20:04:46 +0000 (0:00:58.261) 0:01:43.337 ***********
2025-06-22 20:04:49.171437 | orchestrator | ===============================================================================
2025-06-22 20:04:49.171445 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.26s
2025-06-22 20:04:49.171459 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.51s
2025-06-22 20:04:49.171468 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.30s
2025-06-22 20:04:49.171476 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.29s
2025-06-22 20:04:49.171485 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.14s
2025-06-22 20:04:49.171494 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.66s
2025-06-22 20:04:49.171502 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.54s
2025-06-22 20:04:49.171511 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.48s
2025-06-22 20:04:49.171520 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.44s
2025-06-22 20:04:49.171528 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.21s
2025-06-22 20:04:49.171537 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.11s
2025-06-22 20:04:49.171546 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.10s
2025-06-22 20:04:49.171554 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s
2025-06-22 20:04:49.171563 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.68s
2025-06-22 20:04:49.171572 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s
2025-06-22 20:04:49.171580 | orchestrator | horizon : Update policy file name --------------------------------------- 0.58s
2025-06-22 20:04:49.171589 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.54s
2025-06-22 20:04:49.171598 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.51s
2025-06-22 20:04:49.171606 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s
2025-06-22 20:04:49.171621 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.45s
2025-06-22 20:04:49.171630 | orchestrator | 2025-06-22 20:04:49 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:04:52.216362 | orchestrator | 2025-06-22 20:04:52 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED
2025-06-22 20:04:52.218335 | orchestrator | 2025-06-22 20:04:52 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED
2025-06-22 20:04:52.218378 | orchestrator | 2025-06-22 20:04:52 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:04:55.264584 | orchestrator | 2025-06-22 20:04:55 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED
2025-06-22 20:04:55.266679 | orchestrator | 2025-06-22 20:04:55 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED
2025-06-22 20:04:55.266772 | orchestrator | 2025-06-22 20:04:55 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:04:58.316481 | orchestrator | 2025-06-22 20:04:58 | INFO  |
Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:04:58.317813 | orchestrator | 2025-06-22 20:04:58 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED 2025-06-22 20:04:58.317860 | orchestrator | 2025-06-22 20:04:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:01.366311 | orchestrator | 2025-06-22 20:05:01 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:01.367664 | orchestrator | 2025-06-22 20:05:01 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state STARTED 2025-06-22 20:05:01.367715 | orchestrator | 2025-06-22 20:05:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:04.422642 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:04.423010 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:04.425900 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 3b54d1fd-40dd-45d3-9854-eaebc112cc72 is in state SUCCESS 2025-06-22 20:05:04.425962 | orchestrator | 2025-06-22 20:05:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:07.477592 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:07.479563 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:07.480491 | orchestrator | 2025-06-22 20:05:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:10.524468 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:10.525691 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:10.526328 | orchestrator | 2025-06-22 20:05:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:13.557536 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:13.558558 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:13.558593 | orchestrator | 2025-06-22 20:05:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:16.606311 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:16.606722 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:16.606752 | orchestrator | 2025-06-22 20:05:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:19.647257 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:19.647974 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:19.648007 | orchestrator | 2025-06-22 20:05:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:22.694014 | orchestrator | 2025-06-22 20:05:22 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:22.696393 | orchestrator | 2025-06-22 20:05:22 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:22.696441 | orchestrator | 2025-06-22 20:05:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:25.735155 | 
orchestrator | 2025-06-22 20:05:25 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:25.736123 | orchestrator | 2025-06-22 20:05:25 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:25.736154 | orchestrator | 2025-06-22 20:05:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:28.777703 | orchestrator | 2025-06-22 20:05:28 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:28.778687 | orchestrator | 2025-06-22 20:05:28 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:28.778928 | orchestrator | 2025-06-22 20:05:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:31.823092 | orchestrator | 2025-06-22 20:05:31 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:31.824951 | orchestrator | 2025-06-22 20:05:31 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:31.825282 | orchestrator | 2025-06-22 20:05:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:34.867694 | orchestrator | 2025-06-22 20:05:34 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:34.868872 | orchestrator | 2025-06-22 20:05:34 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:34.868905 | orchestrator | 2025-06-22 20:05:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:37.911497 | orchestrator | 2025-06-22 20:05:37 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:37.912771 | orchestrator | 2025-06-22 20:05:37 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:37.912803 | orchestrator | 2025-06-22 20:05:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:40.953922 | orchestrator | 2025-06-22 20:05:40 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state STARTED 2025-06-22 20:05:40.955899 | orchestrator | 2025-06-22 20:05:40 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:40.955956 | orchestrator | 2025-06-22 20:05:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:43.990402 | orchestrator | 2025-06-22 20:05:43 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:05:43.994360 | orchestrator | 2025-06-22 20:05:43 | INFO  | Task ddfc0a2b-061b-4bc7-b9ca-669e8766cf71 is in state SUCCESS 2025-06-22 20:05:43.996297 | orchestrator | 2025-06-22 20:05:43.996661 | orchestrator | 2025-06-22 20:05:43.996676 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-22 20:05:43.996689 | orchestrator | 2025-06-22 20:05:43.996701 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-22 20:05:43.996740 | orchestrator | Sunday 22 June 2025 20:04:37 +0000 (0:00:00.144) 0:00:00.144 *********** 2025-06-22 20:05:43.996753 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-22 20:05:43.996765 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.996790 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.996802 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder-backup.keyring) 2025-06-22 20:05:43.996813 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.996824 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-22 20:05:43.996835 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-22 20:05:43.996846 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-22 20:05:43.996857 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-06-22 20:05:43.996868 | orchestrator | 2025-06-22 20:05:43.996946 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-22 20:05:43.996963 | orchestrator | Sunday 22 June 2025 20:04:41 +0000 (0:00:03.890) 0:00:04.034 *********** 2025-06-22 20:05:43.996976 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 20:05:43.996987 | orchestrator | 2025-06-22 20:05:43.996999 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-22 20:05:43.997010 | orchestrator | Sunday 22 June 2025 20:04:42 +0000 (0:00:00.898) 0:00:04.933 *********** 2025-06-22 20:05:43.997021 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-22 20:05:43.997032 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.997069 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.997080 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:05:43.997092 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.997103 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-22 20:05:43.997114 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-22 20:05:43.997124 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-22 20:05:43.997135 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-22 20:05:43.997146 | orchestrator | 2025-06-22 20:05:43.997157 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-22 20:05:43.997168 | orchestrator | Sunday 22 June 2025 20:04:54 +0000 (0:00:12.404) 0:00:17.337 *********** 2025-06-22 20:05:43.997180 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-22 20:05:43.997191 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.997202 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.997212 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:05:43.997223 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:05:43.997234 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-22 20:05:43.997245 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-22 20:05:43.997256 | orchestrator | 
changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-06-22 20:05:43.997267 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-06-22 20:05:43.997292 | orchestrator |
2025-06-22 20:05:43.997303 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:05:43.997314 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 20:05:43.997327 | orchestrator |
2025-06-22 20:05:43.997338 | orchestrator |
2025-06-22 20:05:43.997349 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:05:43.997360 | orchestrator | Sunday 22 June 2025 20:05:01 +0000 (0:00:06.455) 0:00:23.793 ***********
2025-06-22 20:05:43.997370 | orchestrator | ===============================================================================
2025-06-22 20:05:43.997381 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.40s
2025-06-22 20:05:43.997392 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.46s
2025-06-22 20:05:43.997403 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.89s
2025-06-22 20:05:43.997413 | orchestrator | Create share directory -------------------------------------------------- 0.90s
2025-06-22 20:05:43.997424 | orchestrator |
2025-06-22 20:05:43.997435 | orchestrator |
2025-06-22 20:05:43.997446 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 20:05:43.997457 | orchestrator |
2025-06-22 20:05:43.997510 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 20:05:43.997523 | orchestrator | Sunday 22 June 2025 20:03:03 +0000 (0:00:00.263) 0:00:00.263 ***********
2025-06-22 20:05:43.997534 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:05:43.997545 | orchestrator | ok: [testbed-node-1]
2025-06-22 20:05:43.997556 | orchestrator | ok: [testbed-node-2]
2025-06-22 20:05:43.997567 | orchestrator |
2025-06-22 20:05:43.997578 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 20:05:43.997589 | orchestrator | Sunday 22 June 2025 20:03:03 +0000 (0:00:00.308) 0:00:00.571 ***********
2025-06-22 20:05:43.997603 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-06-22 20:05:43.997624 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-06-22 20:05:43.997637 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-06-22 20:05:43.997650 | orchestrator |
2025-06-22 20:05:43.997663 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-06-22 20:05:43.997675 | orchestrator |
2025-06-22 20:05:43.997687 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-22 20:05:43.997698 | orchestrator | Sunday 22 June 2025 20:03:04 +0000 (0:00:00.414) 0:00:00.986 ***********
2025-06-22 20:05:43.997711 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 20:05:43.997794 | orchestrator |
2025-06-22 20:05:43.997811 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-06-22 20:05:43.997825 | orchestrator | Sunday 22 June 2025 20:03:04 +0000 (0:00:00.572)
0:00:01.558 *********** 2025-06-22 20:05:43.997842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.997869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.997917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.997939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.997955 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.997968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.997981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998093 | orchestrator | 2025-06-22 20:05:43.998105 
| orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-22 20:05:43.998116 | orchestrator | Sunday 22 June 2025 20:03:06 +0000 (0:00:01.848) 0:00:03.407 *********** 2025-06-22 20:05:43.998128 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-22 20:05:43.998139 | orchestrator | 2025-06-22 20:05:43.998150 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-22 20:05:43.998167 | orchestrator | Sunday 22 June 2025 20:03:07 +0000 (0:00:00.863) 0:00:04.270 *********** 2025-06-22 20:05:43.998179 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:43.998190 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:43.998201 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:43.998211 | orchestrator | 2025-06-22 20:05:43.998222 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-22 20:05:43.998233 | orchestrator | Sunday 22 June 2025 20:03:07 +0000 (0:00:00.470) 0:00:04.741 *********** 2025-06-22 20:05:43.998245 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:05:43.998256 | orchestrator | 2025-06-22 20:05:43.998267 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:43.998283 | orchestrator | Sunday 22 June 2025 20:03:08 +0000 (0:00:00.683) 0:00:05.424 *********** 2025-06-22 20:05:43.998295 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:05:43.998306 | orchestrator | 2025-06-22 20:05:43.998317 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-22 20:05:43.998327 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:00.528) 0:00:05.953 *********** 2025-06-22 20:05:43.998339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.998360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.998373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.998397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998446 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 
'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.998480 | orchestrator | 2025-06-22 20:05:43.998491 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-22 20:05:43.998502 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:03.369) 0:00:09.322 *********** 2025-06-22 20:05:43.998527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:43.998540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.998558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:43.998571 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:43.998583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:43.998595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.998607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:43.998618 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:43.998643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:43.998667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.998679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:43.998826 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:43.998842 | orchestrator | 2025-06-22 20:05:43.998860 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-22 20:05:43.998878 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.468) 0:00:09.791 *********** 2025-06-22 20:05:43.998891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:43.998904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.998931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:43.998964 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:43.998976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:43.998989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.999000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:43.999011 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:43.999023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:43.999063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.999088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:43.999099 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:43.999110 | orchestrator | 2025-06-22 20:05:43.999122 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-22 20:05:43.999133 | orchestrator | Sunday 22 June 2025 20:03:13 +0000 (0:00:00.761) 0:00:10.553 *********** 2025-06-22 20:05:43.999144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999270 | orchestrator | 2025-06-22 20:05:43.999281 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-22 20:05:43.999299 | orchestrator | Sunday 22 June 2025 20:03:17 +0000 (0:00:03.789) 0:00:14.342 *********** 2025-06-22 20:05:43.999319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.999344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.999414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.999449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999486 | orchestrator | 2025-06-22 20:05:43.999499 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-22 20:05:43.999512 | orchestrator | Sunday 22 June 2025 20:03:22 +0000 (0:00:04.900) 0:00:19.242 *********** 2025-06-22 20:05:43.999525 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:43.999537 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:43.999550 | orchestrator | 
changed: [testbed-node-2] 2025-06-22 20:05:43.999563 | orchestrator | 2025-06-22 20:05:43.999575 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-22 20:05:43.999587 | orchestrator | Sunday 22 June 2025 20:03:23 +0000 (0:00:01.317) 0:00:20.560 *********** 2025-06-22 20:05:43.999599 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:43.999612 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:43.999624 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:43.999636 | orchestrator | 2025-06-22 20:05:43.999649 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-22 20:05:43.999661 | orchestrator | Sunday 22 June 2025 20:03:24 +0000 (0:00:00.510) 0:00:21.070 *********** 2025-06-22 20:05:43.999679 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:43.999692 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:43.999704 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:43.999716 | orchestrator | 2025-06-22 20:05:43.999729 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-22 20:05:43.999742 | orchestrator | Sunday 22 June 2025 20:03:24 +0000 (0:00:00.391) 0:00:21.461 *********** 2025-06-22 20:05:43.999754 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:43.999766 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:43.999779 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:43.999791 | orchestrator | 2025-06-22 20:05:43.999803 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-22 20:05:43.999815 | orchestrator | Sunday 22 June 2025 20:03:24 +0000 (0:00:00.262) 0:00:21.724 *********** 2025-06-22 20:05:43.999840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.999866 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.999896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:43.999916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:43.999933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:43.999968 | orchestrator | 2025-06-22 20:05:43.999986 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:44.000005 | orchestrator | Sunday 22 June 2025 20:03:27 +0000 (0:00:02.386) 0:00:24.111 *********** 2025-06-22 20:05:44.000033 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:44.000077 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:44.000088 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:44.000099 | orchestrator | 2025-06-22 20:05:44.000110 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-22 20:05:44.000121 | orchestrator | Sunday 22 June 2025 20:03:27 +0000 (0:00:00.268) 0:00:24.379 *********** 2025-06-22 20:05:44.000132 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:05:44.000143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:05:44.000154 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:05:44.000165 | orchestrator | 2025-06-22 20:05:44.000176 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-22 20:05:44.000187 | orchestrator | Sunday 22 June 2025 20:03:29 +0000 (0:00:01.694) 0:00:26.074 *********** 2025-06-22 20:05:44.000197 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:05:44.000208 | orchestrator | 2025-06-22 20:05:44.000219 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-22 20:05:44.000230 | orchestrator | Sunday 22 June 2025 20:03:30 +0000 
(0:00:00.804) 0:00:26.879 *********** 2025-06-22 20:05:44.000241 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:44.000252 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:44.000262 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:44.000273 | orchestrator | 2025-06-22 20:05:44.000284 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-22 20:05:44.000295 | orchestrator | Sunday 22 June 2025 20:03:30 +0000 (0:00:00.451) 0:00:27.330 *********** 2025-06-22 20:05:44.000306 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 20:05:44.000316 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:05:44.000327 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 20:05:44.000338 | orchestrator | 2025-06-22 20:05:44.000348 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-22 20:05:44.000360 | orchestrator | Sunday 22 June 2025 20:03:31 +0000 (0:00:00.938) 0:00:28.269 *********** 2025-06-22 20:05:44.000370 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:44.000381 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:44.000392 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:44.000403 | orchestrator | 2025-06-22 20:05:44.000420 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-22 20:05:44.000431 | orchestrator | Sunday 22 June 2025 20:03:31 +0000 (0:00:00.270) 0:00:28.539 *********** 2025-06-22 20:05:44.000442 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:05:44.000453 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:05:44.000463 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:05:44.000474 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:05:44.000491 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:05:44.000502 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:05:44.000513 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:05:44.000524 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:05:44.000534 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:05:44.000545 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:05:44.000563 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:05:44.000573 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:05:44.000584 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:05:44.000595 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:05:44.000606 | orchestrator | changed: [testbed-node-2] => (item={'src': 
'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:05:44.000616 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:05:44.000628 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:05:44.000638 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:05:44.000650 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:05:44.000660 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:05:44.000671 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:05:44.000682 | orchestrator | 2025-06-22 20:05:44.000692 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-22 20:05:44.000703 | orchestrator | Sunday 22 June 2025 20:03:40 +0000 (0:00:08.781) 0:00:37.321 *********** 2025-06-22 20:05:44.000714 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:05:44.000725 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:05:44.000735 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:05:44.000746 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:05:44.000757 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:05:44.000768 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:05:44.000778 | orchestrator | 2025-06-22 20:05:44.000789 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-22 20:05:44.000800 | orchestrator | Sunday 22 June 2025 20:03:42 +0000 (0:00:02.519) 0:00:39.840 *********** 2025-06-22 20:05:44.000817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:44.000835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:44.000854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:44.000867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:44.000879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:44.000890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:44.000909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:44.000931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:44.000943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:44.000954 | orchestrator | 2025-06-22 20:05:44.000965 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:44.000976 | orchestrator | Sunday 22 June 2025 20:03:45 +0000 (0:00:02.461) 0:00:42.302 *********** 2025-06-22 20:05:44.000987 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:44.000998 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:44.001009 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:44.001020 | orchestrator | 2025-06-22 20:05:44.001031 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-22 20:05:44.001061 | orchestrator | Sunday 22 June 2025 20:03:45 +0000 (0:00:00.291) 0:00:42.594 *********** 2025-06-22 20:05:44.001072 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:44.001082 | orchestrator | 2025-06-22 20:05:44.001093 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-22 20:05:44.001104 | orchestrator | Sunday 22 June 2025 20:03:47 +0000 (0:00:02.167) 0:00:44.761 *********** 2025-06-22 20:05:44.001115 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:44.001125 | orchestrator | 2025-06-22 20:05:44.001136 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-22 20:05:44.001147 | orchestrator | Sunday 22 June 2025 20:03:50 +0000 (0:00:02.352) 0:00:47.114 *********** 2025-06-22 20:05:44.001158 | 
orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:44.001169 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:44.001180 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:44.001190 | orchestrator | 2025-06-22 20:05:44.001201 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-22 20:05:44.001212 | orchestrator | Sunday 22 June 2025 20:03:51 +0000 (0:00:00.824) 0:00:47.939 *********** 2025-06-22 20:05:44.001223 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:44.001233 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:44.001244 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:44.001254 | orchestrator | 2025-06-22 20:05:44.001265 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-22 20:05:44.001276 | orchestrator | Sunday 22 June 2025 20:03:51 +0000 (0:00:00.274) 0:00:48.213 *********** 2025-06-22 20:05:44.001287 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:44.001298 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:44.001308 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:44.001319 | orchestrator | 2025-06-22 20:05:44.001330 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-22 20:05:44.001341 | orchestrator | Sunday 22 June 2025 20:03:51 +0000 (0:00:00.293) 0:00:48.507 *********** 2025-06-22 20:05:44.001351 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:44.001368 | orchestrator | 2025-06-22 20:05:44.001380 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-22 20:05:44.001391 | orchestrator | Sunday 22 June 2025 20:04:06 +0000 (0:00:14.562) 0:01:03.070 *********** 2025-06-22 20:05:44.001401 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:44.001412 | orchestrator | 2025-06-22 20:05:44.001423 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:05:44.001434 | orchestrator | Sunday 22 June 2025 20:04:16 +0000 (0:00:10.413) 0:01:13.484 *********** 2025-06-22 20:05:44.001444 | orchestrator | 2025-06-22 20:05:44.001455 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:05:44.001466 | orchestrator | Sunday 22 June 2025 20:04:16 +0000 (0:00:00.217) 0:01:13.701 *********** 2025-06-22 20:05:44.001476 | orchestrator | 2025-06-22 20:05:44.001487 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:05:44.001498 | orchestrator | Sunday 22 June 2025 20:04:16 +0000 (0:00:00.057) 0:01:13.759 *********** 2025-06-22 20:05:44.001509 | orchestrator | 2025-06-22 20:05:44.001520 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-22 20:05:44.001536 | orchestrator | Sunday 22 June 2025 20:04:16 +0000 (0:00:00.062) 0:01:13.821 *********** 2025-06-22 20:05:44.001547 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:44.001558 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:44.001569 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:44.001580 | orchestrator | 2025-06-22 20:05:44.001590 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-22 20:05:44.001601 | orchestrator | Sunday 22 June 2025 20:04:31 +0000 (0:00:14.624) 0:01:28.445 *********** 2025-06-22 20:05:44.001612 | orchestrator | 
changed: [testbed-node-1] 2025-06-22 20:05:44.001623 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:44.001633 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:44.001644 | orchestrator | 2025-06-22 20:05:44.001660 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-22 20:05:44.001671 | orchestrator | Sunday 22 June 2025 20:04:41 +0000 (0:00:09.985) 0:01:38.431 *********** 2025-06-22 20:05:44.001682 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:44.001692 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:44.001703 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:44.001714 | orchestrator | 2025-06-22 20:05:44.001725 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:44.001736 | orchestrator | Sunday 22 June 2025 20:04:52 +0000 (0:00:10.928) 0:01:49.360 *********** 2025-06-22 20:05:44.001747 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:05:44.001757 | orchestrator | 2025-06-22 20:05:44.001768 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-22 20:05:44.001779 | orchestrator | Sunday 22 June 2025 20:04:53 +0000 (0:00:00.713) 0:01:50.073 *********** 2025-06-22 20:05:44.001790 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:44.001800 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:44.001811 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:44.001822 | orchestrator | 2025-06-22 20:05:44.001833 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-22 20:05:44.001844 | orchestrator | Sunday 22 June 2025 20:04:53 +0000 (0:00:00.724) 0:01:50.797 *********** 2025-06-22 20:05:44.001855 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:44.001865 | orchestrator | 2025-06-22 20:05:44.001876 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-22 20:05:44.001887 | orchestrator | Sunday 22 June 2025 20:04:55 +0000 (0:00:01.709) 0:01:52.507 *********** 2025-06-22 20:05:44.001898 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-22 20:05:44.001909 | orchestrator | 2025-06-22 20:05:44.001919 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-22 20:05:44.001936 | orchestrator | Sunday 22 June 2025 20:05:06 +0000 (0:00:11.081) 0:02:03.588 *********** 2025-06-22 20:05:44.001947 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-22 20:05:44.001958 | orchestrator | 2025-06-22 20:05:44.001968 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-22 20:05:44.001979 | orchestrator | Sunday 22 June 2025 20:05:29 +0000 (0:00:22.877) 0:02:26.466 *********** 2025-06-22 20:05:44.001990 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-22 20:05:44.002001 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-22 20:05:44.002011 | orchestrator | 2025-06-22 20:05:44.002136 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-22 20:05:44.002148 | orchestrator | Sunday 22 June 2025 20:05:36 +0000 (0:00:06.962) 0:02:33.428 *********** 
2025-06-22 20:05:44.002159 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:05:44.002170 | orchestrator |
2025-06-22 20:05:44.002181 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-06-22 20:05:44.002192 | orchestrator | Sunday 22 June 2025 20:05:36 +0000 (0:00:00.218) 0:02:33.647 ***********
2025-06-22 20:05:44.002203 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:05:44.002213 | orchestrator |
2025-06-22 20:05:44.002224 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-06-22 20:05:44.002235 | orchestrator | Sunday 22 June 2025 20:05:36 +0000 (0:00:00.117) 0:02:33.764 ***********
2025-06-22 20:05:44.002246 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:05:44.002257 | orchestrator |
2025-06-22 20:05:44.002268 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-06-22 20:05:44.002279 | orchestrator | Sunday 22 June 2025 20:05:37 +0000 (0:00:00.111) 0:02:33.876 ***********
2025-06-22 20:05:44.002290 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:05:44.002301 | orchestrator |
2025-06-22 20:05:44.002312 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-06-22 20:05:44.002323 | orchestrator | Sunday 22 June 2025 20:05:37 +0000 (0:00:00.285) 0:02:34.161 ***********
2025-06-22 20:05:44.002333 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:05:44.002344 | orchestrator |
2025-06-22 20:05:44.002355 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-06-22 20:05:44.002366 | orchestrator | Sunday 22 June 2025 20:05:40 +0000 (0:00:03.177) 0:02:37.339 ***********
2025-06-22 20:05:44.002377 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:05:44.002388 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:05:44.002399 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:05:44.002410 | orchestrator |
2025-06-22 20:05:44.002421 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:05:44.002433 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-06-22 20:05:44.002445 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-22 20:05:44.002463 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-06-22 20:05:44.002475 | orchestrator |
2025-06-22 20:05:44.002486 | orchestrator |
2025-06-22 20:05:44.002497 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:05:44.002508 | orchestrator | Sunday 22 June 2025 20:05:40 +0000 (0:00:00.490) 0:02:37.830 ***********
2025-06-22 20:05:44.002519 | orchestrator | ===============================================================================
2025-06-22 20:05:44.002529 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.88s
2025-06-22 20:05:44.002540 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.62s
2025-06-22 20:05:44.002551 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.56s
2025-06-22 20:05:44.002575 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.08s
2025-06-22 20:05:44.002587 | orchestrator | keystone : Restart keystone container ---------------------------------- 10.93s
2025-06-22 20:05:44.002598 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.41s
2025-06-22 20:05:44.002608 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.99s
2025-06-22 20:05:44.002619 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.78s
2025-06-22 20:05:44.002629 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.96s
2025-06-22 20:05:44.002639 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.90s
2025-06-22 20:05:44.002648 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.79s
2025-06-22 20:05:44.002658 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.37s
2025-06-22 20:05:44.002668 | orchestrator | keystone : Creating default user role ----------------------------------- 3.18s
2025-06-22 20:05:44.002677 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.52s
2025-06-22 20:05:44.002687 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.46s
2025-06-22 20:05:44.002696 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.39s
2025-06-22 20:05:44.002706 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.35s
2025-06-22 20:05:44.002715 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.17s
2025-06-22 20:05:44.002725 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.85s
2025-06-22 20:05:44.002734 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.71s
2025-06-22 20:05:44.002744 | orchestrator | 2025-06-22 20:05:43 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED
2025-06-22 20:05:44.002754 | orchestrator | 2025-06-22 20:05:43 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED
2025-06-22 20:05:44.002763 | orchestrator | 2025-06-22 20:05:43 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED
2025-06-22 20:05:44.002773 | orchestrator | 2025-06-22 20:05:43 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED
2025-06-22 20:05:44.002783 | orchestrator | 2025-06-22 20:05:43 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:47.031600 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED
2025-06-22 20:05:47.034904 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED
2025-06-22 20:05:47.035313 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED
2025-06-22 20:05:47.036166 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED
2025-06-22 20:05:47.036480 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED
2025-06-22 20:05:47.036520 | orchestrator | 2025-06-22 20:05:47 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:05:50.083429 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22
20:05:50.084386 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:05:50.085742 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:05:50.087898 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:05:50.088899 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:50.088932 | orchestrator | 2025-06-22 20:05:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:53.132110 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:05:53.132221 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:05:53.133342 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:05:53.134887 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:05:53.136190 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state STARTED 2025-06-22 20:05:53.136220 | orchestrator | 2025-06-22 20:05:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:56.190492 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:05:56.192371 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:05:56.194360 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:05:56.196308 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:05:56.197813 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:05:56.200004 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 5fdbf536-171c-402a-a37e-9547543ec6f6 is in state SUCCESS 2025-06-22 20:05:56.200029 | orchestrator | 2025-06-22 20:05:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:59.244287 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:05:59.246770 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:05:59.247836 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:05:59.248961 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:05:59.250420 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:05:59.250460 | orchestrator | 2025-06-22 20:05:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:02.287350 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:02.288661 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:02.288822 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in 
state STARTED 2025-06-22 20:06:02.290497 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:02.291183 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:02.291214 | orchestrator | 2025-06-22 20:06:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:05.333766 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:05.333851 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:05.335011 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:05.336151 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:05.337142 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:05.337797 | orchestrator | 2025-06-22 20:06:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:08.373423 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:08.375060 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:08.377388 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:08.379383 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:08.381426 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:08.381468 | orchestrator | 2025-06-22 20:06:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:11.421094 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:11.423146 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:11.425363 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:11.427237 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:11.428115 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:11.428185 | orchestrator | 2025-06-22 20:06:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:14.469223 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:14.472327 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:14.475061 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:14.476896 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:14.478872 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:14.478936 | orchestrator | 2025-06-22 20:06:14 | INFO  | Wait 1 second(s) until the 
next check 2025-06-22 20:06:17.521828 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:17.522189 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:17.523481 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:17.524967 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:17.526268 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:17.526293 | orchestrator | 2025-06-22 20:06:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:20.567226 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:20.569649 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:20.571164 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:20.573327 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:20.574665 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:20.575149 | orchestrator | 2025-06-22 20:06:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:23.613446 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:23.613867 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:23.615807 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:23.616555 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:23.617280 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:23.617302 | orchestrator | 2025-06-22 20:06:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:26.649285 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:26.650994 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:26.651716 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:26.652588 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:26.653377 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:26.653487 | orchestrator | 2025-06-22 20:06:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:29.690134 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:29.690992 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:29.693558 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task 
8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:29.693581 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:29.693608 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:29.693620 | orchestrator | 2025-06-22 20:06:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:32.715332 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:32.715655 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:32.716515 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:32.717235 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:32.717946 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:32.718189 | orchestrator | 2025-06-22 20:06:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:35.748344 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:35.748734 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:35.749431 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:35.750211 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:35.751109 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:35.751150 | orchestrator | 2025-06-22 20:06:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:38.779543 | orchestrator | 2025-06-22 20:06:38 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:38.779850 | orchestrator | 2025-06-22 20:06:38 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:38.781109 | orchestrator | 2025-06-22 20:06:38 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:38.781779 | orchestrator | 2025-06-22 20:06:38 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:38.782713 | orchestrator | 2025-06-22 20:06:38 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:38.782744 | orchestrator | 2025-06-22 20:06:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:41.809071 | orchestrator | 2025-06-22 20:06:41 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:41.809179 | orchestrator | 2025-06-22 20:06:41 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:41.809792 | orchestrator | 2025-06-22 20:06:41 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:41.810197 | orchestrator | 2025-06-22 20:06:41 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:41.811979 | orchestrator | 2025-06-22 20:06:41 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:41.812003 | orchestrator | 2025-06-22 
20:06:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:44.840427 | orchestrator | 2025-06-22 20:06:44 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:44.840514 | orchestrator | 2025-06-22 20:06:44 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:44.840954 | orchestrator | 2025-06-22 20:06:44 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:44.841486 | orchestrator | 2025-06-22 20:06:44 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:44.842013 | orchestrator | 2025-06-22 20:06:44 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:44.842120 | orchestrator | 2025-06-22 20:06:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:47.866959 | orchestrator | 2025-06-22 20:06:47 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:47.867578 | orchestrator | 2025-06-22 20:06:47 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:47.869291 | orchestrator | 2025-06-22 20:06:47 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:47.869934 | orchestrator | 2025-06-22 20:06:47 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:47.870521 | orchestrator | 2025-06-22 20:06:47 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:47.870556 | orchestrator | 2025-06-22 20:06:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:50.894426 | orchestrator | 2025-06-22 20:06:50 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:50.894698 | orchestrator | 2025-06-22 20:06:50 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:50.895450 | orchestrator | 2025-06-22 20:06:50 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:50.895660 | orchestrator | 2025-06-22 20:06:50 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:50.896183 | orchestrator | 2025-06-22 20:06:50 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:50.896209 | orchestrator | 2025-06-22 20:06:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:53.927919 | orchestrator | 2025-06-22 20:06:53 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:53.928293 | orchestrator | 2025-06-22 20:06:53 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:53.929214 | orchestrator | 2025-06-22 20:06:53 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:53.929754 | orchestrator | 2025-06-22 20:06:53 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:53.930900 | orchestrator | 2025-06-22 20:06:53 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:53.930936 | orchestrator | 2025-06-22 20:06:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:56.964994 | orchestrator | 2025-06-22 20:06:56 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:56.965231 | orchestrator | 2025-06-22 20:06:56 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:56.965256 | orchestrator | 2025-06-22 
20:06:56 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:56.965281 | orchestrator | 2025-06-22 20:06:56 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:56.967717 | orchestrator | 2025-06-22 20:06:56 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:56.967757 | orchestrator | 2025-06-22 20:06:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:59.986376 | orchestrator | 2025-06-22 20:06:59 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:06:59.986681 | orchestrator | 2025-06-22 20:06:59 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:06:59.987195 | orchestrator | 2025-06-22 20:06:59 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:06:59.987907 | orchestrator | 2025-06-22 20:06:59 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:06:59.988698 | orchestrator | 2025-06-22 20:06:59 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:06:59.988710 | orchestrator | 2025-06-22 20:06:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:03.009701 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:03.011149 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:03.011526 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:03.012196 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:03.013812 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:03.013846 | orchestrator | 2025-06-22 20:07:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:06.051768 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:06.054761 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:06.054848 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:06.054863 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:06.055231 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:06.055254 | orchestrator | 2025-06-22 20:07:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:09.076581 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:09.076669 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:09.076843 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:09.077482 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:09.078239 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:09.078265 | 
orchestrator | 2025-06-22 20:07:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:12.107731 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:12.108178 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:12.109484 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:12.110200 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:12.111078 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:12.111105 | orchestrator | 2025-06-22 20:07:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:15.140541 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:15.142359 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:15.143522 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:15.144321 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:15.145386 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:15.145438 | orchestrator | 2025-06-22 20:07:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:18.189525 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:18.190701 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:18.191725 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:18.192701 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:18.193667 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:18.193686 | orchestrator | 2025-06-22 20:07:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:21.223918 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:21.224501 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:21.225290 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:21.227127 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:21.231502 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:21.231564 | orchestrator | 2025-06-22 20:07:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:24.269104 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:24.271615 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:24.275384 | 
orchestrator | 2025-06-22 20:07:24 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:24.278595 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:24.280587 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:24.281257 | orchestrator | 2025-06-22 20:07:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:27.328718 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:27.329924 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:27.329984 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:27.332032 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:27.332496 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:27.332534 | orchestrator | 2025-06-22 20:07:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:30.373197 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:30.373411 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state STARTED 2025-06-22 20:07:30.374240 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:30.375743 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:30.376211 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:30.376233 | orchestrator | 2025-06-22 20:07:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:33.401718 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:33.404201 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task b24e8369-32c4-4603-8063-abf8462f683c is in state SUCCESS 2025-06-22 20:07:33.404238 | orchestrator | 2025-06-22 20:07:33.404251 | orchestrator | 2025-06-22 20:07:33.404263 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-22 20:07:33.404275 | orchestrator | 2025-06-22 20:07:33.404286 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-22 20:07:33.404297 | orchestrator | Sunday 22 June 2025 20:05:05 +0000 (0:00:00.237) 0:00:00.237 *********** 2025-06-22 20:07:33.404308 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-22 20:07:33.404320 | orchestrator | 2025-06-22 20:07:33.404331 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-22 20:07:33.404342 | orchestrator | Sunday 22 June 2025 20:05:05 +0000 (0:00:00.308) 0:00:00.545 *********** 2025-06-22 20:07:33.404353 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-22 20:07:33.404363 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-22 20:07:33.404375 | 
orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-22 20:07:33.404386 | orchestrator | 2025-06-22 20:07:33.404396 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-22 20:07:33.404407 | orchestrator | Sunday 22 June 2025 20:05:07 +0000 (0:00:01.150) 0:00:01.696 *********** 2025-06-22 20:07:33.404418 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-22 20:07:33.404428 | orchestrator | 2025-06-22 20:07:33.404440 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-22 20:07:33.404451 | orchestrator | Sunday 22 June 2025 20:05:08 +0000 (0:00:01.113) 0:00:02.810 *********** 2025-06-22 20:07:33.404462 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.404472 | orchestrator | 2025-06-22 20:07:33.404483 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-22 20:07:33.404494 | orchestrator | Sunday 22 June 2025 20:05:09 +0000 (0:00:00.934) 0:00:03.745 *********** 2025-06-22 20:07:33.404504 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.404515 | orchestrator | 2025-06-22 20:07:33.404526 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-22 20:07:33.404536 | orchestrator | Sunday 22 June 2025 20:05:09 +0000 (0:00:00.754) 0:00:04.499 *********** 2025-06-22 20:07:33.404547 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-06-22 20:07:33.404557 | orchestrator | ok: [testbed-manager] 2025-06-22 20:07:33.404568 | orchestrator | 2025-06-22 20:07:33.404579 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-22 20:07:33.404590 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:36.410) 0:00:40.910 *********** 2025-06-22 20:07:33.404600 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-22 20:07:33.404612 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-22 20:07:33.404636 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-22 20:07:33.404648 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-22 20:07:33.404659 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-22 20:07:33.404669 | orchestrator | 2025-06-22 20:07:33.404701 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-22 20:07:33.404712 | orchestrator | Sunday 22 June 2025 20:05:49 +0000 (0:00:03.188) 0:00:44.098 *********** 2025-06-22 20:07:33.404723 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-22 20:07:33.404734 | orchestrator | 2025-06-22 20:07:33.404744 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-22 20:07:33.404755 | orchestrator | Sunday 22 June 2025 20:05:49 +0000 (0:00:00.403) 0:00:44.502 *********** 2025-06-22 20:07:33.404766 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:07:33.404777 | orchestrator | 2025-06-22 20:07:33.404787 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-22 20:07:33.404798 | orchestrator | Sunday 22 June 2025 20:05:49 +0000 (0:00:00.125) 0:00:44.627 *********** 2025-06-22 20:07:33.404809 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:07:33.404823 | 
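The cephclient tasks above render configuration (ceph.conf, keyring) and a docker-compose.yml under /opt/cephclient and then appear to manage a single long-lived client container through it; the wrapper scripts copied afterwards (ceph, ceph-authtool, rados, radosgw-admin, rbd) are presumably thin shims that execute the corresponding CLI inside that container. A minimal sketch of such a compose file follows; the image tag, mount targets, network mode, and idle command are assumptions for illustration only, not the file actually templated by osism.services.cephclient:

    # /opt/cephclient/docker-compose.yml -- illustrative sketch, not the rendered file
    services:
      cephclient:
        image: registry.osism.tech/osism/cephclient:latest   # assumed image reference
        restart: unless-stopped
        network_mode: host                                    # assumed; Ceph monitors are reached directly
        volumes:
          - /opt/cephclient/configuration:/etc/ceph:ro        # ceph.conf and keyring copied above
          - /opt/cephclient/data:/data                        # working directory created above
        command: sleep infinity                               # keep the container idle so wrappers can docker exec into it

Under a layout like this, the handlers that follow only need to re-run the compose service ("Restart cephclient service", "Ensure that all containers are up") and then wait until the container reports healthy.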
orchestrator | 2025-06-22 20:07:33.404834 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-22 20:07:33.404847 | orchestrator | Sunday 22 June 2025 20:05:50 +0000 (0:00:00.279) 0:00:44.907 *********** 2025-06-22 20:07:33.404859 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.404871 | orchestrator | 2025-06-22 20:07:33.404883 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-22 20:07:33.404895 | orchestrator | Sunday 22 June 2025 20:05:51 +0000 (0:00:01.379) 0:00:46.287 *********** 2025-06-22 20:07:33.404907 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.404919 | orchestrator | 2025-06-22 20:07:33.404932 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-22 20:07:33.404944 | orchestrator | Sunday 22 June 2025 20:05:52 +0000 (0:00:00.604) 0:00:46.891 *********** 2025-06-22 20:07:33.404956 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.404968 | orchestrator | 2025-06-22 20:07:33.404979 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-22 20:07:33.404990 | orchestrator | Sunday 22 June 2025 20:05:52 +0000 (0:00:00.617) 0:00:47.508 *********** 2025-06-22 20:07:33.405001 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-22 20:07:33.405012 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-22 20:07:33.405023 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-22 20:07:33.405033 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-22 20:07:33.405062 | orchestrator | 2025-06-22 20:07:33.405073 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:07:33.405098 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:07:33.405110 | orchestrator | 2025-06-22 20:07:33.405120 | orchestrator | 2025-06-22 20:07:33.405131 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:07:33.405142 | orchestrator | Sunday 22 June 2025 20:05:54 +0000 (0:00:01.449) 0:00:48.958 *********** 2025-06-22 20:07:33.405152 | orchestrator | =============================================================================== 2025-06-22 20:07:33.405163 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.41s 2025-06-22 20:07:33.405174 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.19s 2025-06-22 20:07:33.405185 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.45s 2025-06-22 20:07:33.405195 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.38s 2025-06-22 20:07:33.405206 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.15s 2025-06-22 20:07:33.405216 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.11s 2025-06-22 20:07:33.405227 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.93s 2025-06-22 20:07:33.405237 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.75s 2025-06-22 20:07:33.405248 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.62s 2025-06-22 
20:07:33.405266 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.60s 2025-06-22 20:07:33.405276 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.40s 2025-06-22 20:07:33.405287 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.31s 2025-06-22 20:07:33.405297 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.28s 2025-06-22 20:07:33.405308 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-06-22 20:07:33.405319 | orchestrator | 2025-06-22 20:07:33.405330 | orchestrator | 2025-06-22 20:07:33.405340 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-22 20:07:33.405351 | orchestrator | 2025-06-22 20:07:33.405362 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-22 20:07:33.405373 | orchestrator | Sunday 22 June 2025 20:05:58 +0000 (0:00:00.278) 0:00:00.278 *********** 2025-06-22 20:07:33.405383 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405394 | orchestrator | 2025-06-22 20:07:33.405404 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-22 20:07:33.405415 | orchestrator | Sunday 22 June 2025 20:06:00 +0000 (0:00:01.527) 0:00:01.805 *********** 2025-06-22 20:07:33.405426 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405436 | orchestrator | 2025-06-22 20:07:33.405447 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-22 20:07:33.405458 | orchestrator | Sunday 22 June 2025 20:06:01 +0000 (0:00:01.017) 0:00:02.822 *********** 2025-06-22 20:07:33.405468 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405479 | orchestrator | 2025-06-22 20:07:33.405494 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-22 20:07:33.405505 | orchestrator | Sunday 22 June 2025 20:06:02 +0000 (0:00:00.905) 0:00:03.727 *********** 2025-06-22 20:07:33.405516 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405527 | orchestrator | 2025-06-22 20:07:33.405538 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-22 20:07:33.405548 | orchestrator | Sunday 22 June 2025 20:06:03 +0000 (0:00:01.027) 0:00:04.755 *********** 2025-06-22 20:07:33.405559 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405569 | orchestrator | 2025-06-22 20:07:33.405580 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-22 20:07:33.405591 | orchestrator | Sunday 22 June 2025 20:06:04 +0000 (0:00:00.966) 0:00:05.721 *********** 2025-06-22 20:07:33.405601 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405612 | orchestrator | 2025-06-22 20:07:33.405623 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-22 20:07:33.405634 | orchestrator | Sunday 22 June 2025 20:06:05 +0000 (0:00:00.877) 0:00:06.599 *********** 2025-06-22 20:07:33.405644 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405655 | orchestrator | 2025-06-22 20:07:33.405666 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-22 20:07:33.405677 | orchestrator | Sunday 22 June 2025 
20:06:06 +0000 (0:00:01.176) 0:00:07.775 *********** 2025-06-22 20:07:33.405687 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405698 | orchestrator | 2025-06-22 20:07:33.405708 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-22 20:07:33.405719 | orchestrator | Sunday 22 June 2025 20:06:07 +0000 (0:00:01.003) 0:00:08.779 *********** 2025-06-22 20:07:33.405730 | orchestrator | changed: [testbed-manager] 2025-06-22 20:07:33.405741 | orchestrator | 2025-06-22 20:07:33.405751 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-22 20:07:33.405762 | orchestrator | Sunday 22 June 2025 20:07:06 +0000 (0:00:59.293) 0:01:08.073 *********** 2025-06-22 20:07:33.405773 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:07:33.405783 | orchestrator | 2025-06-22 20:07:33.405794 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:07:33.405811 | orchestrator | 2025-06-22 20:07:33.405822 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-22 20:07:33.405833 | orchestrator | Sunday 22 June 2025 20:07:06 +0000 (0:00:00.135) 0:01:08.209 *********** 2025-06-22 20:07:33.405843 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:33.405854 | orchestrator | 2025-06-22 20:07:33.405865 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:07:33.405875 | orchestrator | 2025-06-22 20:07:33.405886 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-22 20:07:33.405897 | orchestrator | Sunday 22 June 2025 20:07:18 +0000 (0:00:11.600) 0:01:19.809 *********** 2025-06-22 20:07:33.405908 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:33.405918 | orchestrator | 2025-06-22 20:07:33.405935 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:07:33.405946 | orchestrator | 2025-06-22 20:07:33.405957 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-22 20:07:33.405968 | orchestrator | Sunday 22 June 2025 20:07:19 +0000 (0:00:01.226) 0:01:21.035 *********** 2025-06-22 20:07:33.405978 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:33.405989 | orchestrator | 2025-06-22 20:07:33.405999 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:07:33.406010 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 20:07:33.406080 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:07:33.406092 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:07:33.406103 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:07:33.406114 | orchestrator | 2025-06-22 20:07:33.406124 | orchestrator | 2025-06-22 20:07:33.406135 | orchestrator | 2025-06-22 20:07:33.406146 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:07:33.406157 | orchestrator | Sunday 22 June 2025 20:07:30 +0000 (0:00:11.181) 0:01:32.217 *********** 2025-06-22 20:07:33.406167 | orchestrator | 
=============================================================================== 2025-06-22 20:07:33.406178 | orchestrator | Create admin user ------------------------------------------------------ 59.29s 2025-06-22 20:07:33.406188 | orchestrator | Restart ceph manager service ------------------------------------------- 24.01s 2025-06-22 20:07:33.406199 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.53s 2025-06-22 20:07:33.406209 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.18s 2025-06-22 20:07:33.406220 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.03s 2025-06-22 20:07:33.406230 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.02s 2025-06-22 20:07:33.406241 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.00s 2025-06-22 20:07:33.406252 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.97s 2025-06-22 20:07:33.406262 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s 2025-06-22 20:07:33.406273 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.88s 2025-06-22 20:07:33.406284 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2025-06-22 20:07:33.406626 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:33.408996 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:33.411637 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:33.411964 | orchestrator | 2025-06-22 20:07:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:36.441082 | orchestrator | 2025-06-22 20:07:36 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:36.443373 | orchestrator | 2025-06-22 20:07:36 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:36.444869 | orchestrator | 2025-06-22 20:07:36 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:36.446217 | orchestrator | 2025-06-22 20:07:36 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:36.446249 | orchestrator | 2025-06-22 20:07:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:39.479918 | orchestrator | 2025-06-22 20:07:39 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:39.480705 | orchestrator | 2025-06-22 20:07:39 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 2025-06-22 20:07:39.482167 | orchestrator | 2025-06-22 20:07:39 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:39.482637 | orchestrator | 2025-06-22 20:07:39 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:39.482802 | orchestrator | 2025-06-22 20:07:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:42.515201 | orchestrator | 2025-06-22 20:07:42 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:42.515299 | orchestrator | 2025-06-22 20:07:42 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state STARTED 
2025-06-22 20:07:42.516928 | orchestrator | 2025-06-22 20:07:42 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:42.518630 | orchestrator | 2025-06-22 20:07:42 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:42.518670 | orchestrator | 2025-06-22 20:07:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:45.538375 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:45.539644 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:07:45.540509 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task 8c87464a-1668-492c-9876-e26ce362df90 is in state SUCCESS 2025-06-22 20:07:45.542377 | orchestrator | 2025-06-22 20:07:45.542414 | orchestrator | 2025-06-22 20:07:45.542426 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:07:45.542438 | orchestrator | 2025-06-22 20:07:45.542450 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:07:45.542462 | orchestrator | Sunday 22 June 2025 20:05:45 +0000 (0:00:00.357) 0:00:00.357 *********** 2025-06-22 20:07:45.542811 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:07:45.542830 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:07:45.542842 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:07:45.542852 | orchestrator | 2025-06-22 20:07:45.542943 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:07:45.542965 | orchestrator | Sunday 22 June 2025 20:05:45 +0000 (0:00:00.297) 0:00:00.654 *********** 2025-06-22 20:07:45.542982 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-22 20:07:45.543005 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-22 20:07:45.543026 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-22 20:07:45.543068 | orchestrator | 2025-06-22 20:07:45.543080 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-22 20:07:45.543091 | orchestrator | 2025-06-22 20:07:45.543128 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:07:45.543140 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:00.565) 0:00:01.219 *********** 2025-06-22 20:07:45.543151 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:07:45.543163 | orchestrator | 2025-06-22 20:07:45.543174 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-22 20:07:45.543185 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:00.655) 0:00:01.875 *********** 2025-06-22 20:07:45.543197 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-22 20:07:45.543208 | orchestrator | 2025-06-22 20:07:45.543219 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-22 20:07:45.543230 | orchestrator | Sunday 22 June 2025 20:05:50 +0000 (0:00:03.815) 0:00:05.691 *********** 2025-06-22 20:07:45.543241 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-22 20:07:45.543264 | orchestrator | changed: [testbed-node-0] => 
(item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-22 20:07:45.543276 | orchestrator | 2025-06-22 20:07:45.543287 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-22 20:07:45.543298 | orchestrator | Sunday 22 June 2025 20:05:57 +0000 (0:00:07.172) 0:00:12.863 *********** 2025-06-22 20:07:45.543309 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:07:45.543320 | orchestrator | 2025-06-22 20:07:45.543330 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-22 20:07:45.543341 | orchestrator | Sunday 22 June 2025 20:06:01 +0000 (0:00:03.535) 0:00:16.399 *********** 2025-06-22 20:07:45.543352 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:07:45.543363 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-22 20:07:45.543374 | orchestrator | 2025-06-22 20:07:45.543385 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-22 20:07:45.543396 | orchestrator | Sunday 22 June 2025 20:06:05 +0000 (0:00:04.178) 0:00:20.577 *********** 2025-06-22 20:07:45.543407 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:07:45.543417 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-22 20:07:45.543428 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-22 20:07:45.543439 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-22 20:07:45.543450 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-22 20:07:45.543461 | orchestrator | 2025-06-22 20:07:45.543471 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-22 20:07:45.543482 | orchestrator | Sunday 22 June 2025 20:06:22 +0000 (0:00:17.284) 0:00:37.861 *********** 2025-06-22 20:07:45.543493 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-22 20:07:45.543504 | orchestrator | 2025-06-22 20:07:45.543514 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-22 20:07:45.543525 | orchestrator | Sunday 22 June 2025 20:06:27 +0000 (0:00:05.019) 0:00:42.881 *********** 2025-06-22 20:07:45.543540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.543575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.543596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.543610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.543624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.543637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.543664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.543679 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.543692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.543705 | orchestrator | 2025-06-22 20:07:45.543719 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-22 20:07:45.543733 | orchestrator | Sunday 22 June 2025 20:06:29 +0000 (0:00:01.811) 0:00:44.693 *********** 2025-06-22 20:07:45.543744 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-22 20:07:45.543755 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-22 20:07:45.543770 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-22 20:07:45.543781 | orchestrator | 2025-06-22 20:07:45.543792 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-22 20:07:45.543803 | orchestrator | Sunday 22 June 2025 20:06:30 +0000 (0:00:01.024) 0:00:45.718 *********** 2025-06-22 20:07:45.543814 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:45.543825 | orchestrator | 2025-06-22 20:07:45.543835 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-22 20:07:45.543846 | orchestrator | Sunday 22 June 2025 20:06:30 +0000 (0:00:00.114) 0:00:45.832 *********** 2025-06-22 20:07:45.543857 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:45.543868 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:45.543878 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:45.543889 | orchestrator | 2025-06-22 20:07:45.543900 | orchestrator | 
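The "Ensuring config directories exist" and "Ensuring vassals config directories exist" tasks above iterate over the role's service dictionary (barbican-api, barbican-keystone-listener, barbican-worker), whose entries are the large items printed in the log. A minimal sketch of that kolla-ansible-style loop is shown below; the variable and path names (node_config_directory, barbican_services) follow common kolla-ansible conventions and are assumed for illustration rather than taken from the actual role source:

    - name: Ensuring config directories exist          # sketch of the pattern, not the real task
      ansible.builtin.file:
        path: "{{ node_config_directory }}/{{ item.key }}"   # e.g. /etc/kolla/barbican-api
        state: directory
        mode: "0770"
      become: true
      when: item.value.enabled | bool                  # only services enabled on this host
      with_dict: "{{ barbican_services }}"             # the dictionary whose values appear as loop items above

The same dictionary also drives the certificate-copy task that follows (service-cert-copy : barbican | Copying over extra CA certificates), which is why it loops over the same three service entries on each node.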
TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:07:45.543911 | orchestrator | Sunday 22 June 2025 20:06:31 +0000 (0:00:00.397) 0:00:46.230 *********** 2025-06-22 20:07:45.543922 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:07:45.543933 | orchestrator | 2025-06-22 20:07:45.543944 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-22 20:07:45.543955 | orchestrator | Sunday 22 June 2025 20:06:31 +0000 (0:00:00.487) 0:00:46.718 *********** 2025-06-22 20:07:45.543967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.543992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.544005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.544021 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544123 | orchestrator | 2025-06-22 20:07:45.544134 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-22 20:07:45.544146 | orchestrator | Sunday 22 June 2025 20:06:35 +0000 (0:00:03.554) 0:00:50.272 *********** 2025-06-22 20:07:45.544162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.544173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544203 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:45.544220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.544232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544255 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:45.544271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.544289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544312 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:45.544323 | orchestrator | 2025-06-22 20:07:45.544334 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-22 20:07:45.544346 | orchestrator | Sunday 22 June 2025 20:06:35 +0000 (0:00:00.771) 0:00:51.044 *********** 2025-06-22 20:07:45.544364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.544376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544409 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:45.544420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.544438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544461 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:45.544480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.544491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.544524 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:45.544536 | orchestrator | 2025-06-22 20:07:45.544547 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-22 20:07:45.544558 | orchestrator | Sunday 22 June 2025 20:06:37 +0000 (0:00:01.073) 0:00:52.118 *********** 2025-06-22 20:07:45.544570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.544587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.544599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.544615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544696 | orchestrator | 2025-06-22 20:07:45.544707 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-22 20:07:45.544719 | orchestrator | Sunday 22 June 2025 20:06:41 +0000 (0:00:04.191) 0:00:56.309 *********** 2025-06-22 20:07:45.544730 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:45.544741 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:45.544752 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:45.544763 | orchestrator | 2025-06-22 20:07:45.544774 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-22 20:07:45.544785 | orchestrator | Sunday 22 June 2025 20:06:43 +0000 (0:00:02.321) 0:00:58.630 *********** 2025-06-22 20:07:45.544796 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:07:45.544807 | orchestrator | 2025-06-22 20:07:45.544823 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-22 20:07:45.544834 | orchestrator | Sunday 22 June 2025 20:06:45 +0000 (0:00:01.925) 0:01:00.556 *********** 2025-06-22 20:07:45.544845 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:45.544856 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:45.544867 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:45.544878 | orchestrator | 2025-06-22 20:07:45.544889 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-22 20:07:45.544900 | orchestrator | Sunday 22 June 2025 20:06:46 +0000 (0:00:00.626) 0:01:01.182 *********** 2025-06-22 20:07:45.544916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.544928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.544946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.544957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.544991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545107 | orchestrator | 2025-06-22 20:07:45.545128 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-22 20:07:45.545147 | orchestrator | Sunday 22 June 2025 20:06:53 +0000 (0:00:07.657) 0:01:08.840 *********** 2025-06-22 20:07:45.545176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.545214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.545234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.545253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.545271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.545289 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:45.545308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.545319 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:45.545336 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:45.545350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.545361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:45.545371 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:45.545381 | orchestrator | 2025-06-22 20:07:45.545391 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-22 20:07:45.545401 | orchestrator | Sunday 22 June 2025 20:06:55 +0000 (0:00:01.695) 0:01:10.535 *********** 2025-06-22 20:07:45.545411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 
20:07:45.545427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.545443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:45.545460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:45.545534 | orchestrator | 2025-06-22 20:07:45.545544 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:07:45.545554 | orchestrator | Sunday 22 June 2025 20:06:59 +0000 (0:00:03.697) 0:01:14.233 *********** 2025-06-22 20:07:45.545564 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:45.545574 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:45.545584 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:45.545593 | orchestrator | 2025-06-22 20:07:45.545603 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-22 20:07:45.545613 | orchestrator | Sunday 22 June 2025 20:06:59 +0000 (0:00:00.300) 0:01:14.534 *********** 2025-06-22 20:07:45.545622 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:45.545632 | orchestrator | 2025-06-22 20:07:45.545642 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-22 20:07:45.545651 | orchestrator | Sunday 22 June 2025 20:07:01 +0000 (0:00:02.370) 0:01:16.904 *********** 2025-06-22 20:07:45.545665 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:45.545674 | orchestrator | 
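The two tasks just above ("Creating barbican database" and "Creating barbican database user and setting permissions") prepare the MariaDB backend before the bootstrap container runs the schema migration. kolla-ansible normally drives this through its database modules against the internal database endpoint; the net effect is roughly the SQL in the following sketch. This is a minimal illustration only: the host, root credentials and service password shown here are hypothetical placeholders, not values taken from this deployment.

    import pymysql

    # Hedged sketch of what the "Creating barbican database" and
    # "Creating barbican database user and setting permissions" tasks amount to.
    # Connection details below are placeholders, not this testbed's real values.
    conn = pymysql.connect(host="database.internal.example", user="root", password="secret")
    try:
        with conn.cursor() as cur:
            # Create the service database if it is not already present.
            cur.execute("CREATE DATABASE IF NOT EXISTS barbican")
            # Create the service user; '%%' escapes a literal '%' for pymysql
            # because parameters are being interpolated into this statement.
            cur.execute(
                "CREATE USER IF NOT EXISTS 'barbican'@'%%' IDENTIFIED BY %s",
                ("barbican-db-password",),
            )
            # Grant the service user full access to its own database.
            cur.execute("GRANT ALL PRIVILEGES ON barbican.* TO 'barbican'@'%'")
        conn.commit()
    finally:
        conn.close()
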
2025-06-22 20:07:45.545684 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-22 20:07:45.545694 | orchestrator | Sunday 22 June 2025 20:07:04 +0000 (0:00:02.229) 0:01:19.134 *********** 2025-06-22 20:07:45.545704 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:45.545713 | orchestrator | 2025-06-22 20:07:45.545723 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:07:45.545733 | orchestrator | Sunday 22 June 2025 20:07:15 +0000 (0:00:11.764) 0:01:30.898 *********** 2025-06-22 20:07:45.545742 | orchestrator | 2025-06-22 20:07:45.545752 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:07:45.545762 | orchestrator | Sunday 22 June 2025 20:07:16 +0000 (0:00:00.152) 0:01:31.051 *********** 2025-06-22 20:07:45.545771 | orchestrator | 2025-06-22 20:07:45.545781 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:07:45.545790 | orchestrator | Sunday 22 June 2025 20:07:16 +0000 (0:00:00.151) 0:01:31.203 *********** 2025-06-22 20:07:45.545800 | orchestrator | 2025-06-22 20:07:45.545809 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-22 20:07:45.545819 | orchestrator | Sunday 22 June 2025 20:07:16 +0000 (0:00:00.159) 0:01:31.362 *********** 2025-06-22 20:07:45.545829 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:45.545838 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:45.545848 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:45.545858 | orchestrator | 2025-06-22 20:07:45.545867 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-22 20:07:45.545877 | orchestrator | Sunday 22 June 2025 20:07:29 +0000 (0:00:12.994) 0:01:44.357 *********** 2025-06-22 20:07:45.545887 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:45.545896 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:45.545911 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:45.545921 | orchestrator | 2025-06-22 20:07:45.545930 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-22 20:07:45.545940 | orchestrator | Sunday 22 June 2025 20:07:34 +0000 (0:00:05.418) 0:01:49.775 *********** 2025-06-22 20:07:45.545950 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:45.545959 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:45.545968 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:45.545978 | orchestrator | 2025-06-22 20:07:45.545988 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:07:45.545999 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:07:45.546009 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:07:45.546076 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:07:45.546090 | orchestrator | 2025-06-22 20:07:45.546100 | orchestrator | 2025-06-22 20:07:45.546109 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:07:45.546119 | orchestrator | Sunday 22 June 2025 20:07:42 +0000 (0:00:07.617) 0:01:57.393 
*********** 2025-06-22 20:07:45.546129 | orchestrator | =============================================================================== 2025-06-22 20:07:45.546139 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.28s 2025-06-22 20:07:45.546155 | orchestrator | barbican : Restart barbican-api container ------------------------------ 12.99s 2025-06-22 20:07:45.546165 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.76s 2025-06-22 20:07:45.546174 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 7.66s 2025-06-22 20:07:45.546184 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.62s 2025-06-22 20:07:45.546194 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.17s 2025-06-22 20:07:45.546204 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 5.42s 2025-06-22 20:07:45.546213 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 5.02s 2025-06-22 20:07:45.546223 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.19s 2025-06-22 20:07:45.546232 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.18s 2025-06-22 20:07:45.546242 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.82s 2025-06-22 20:07:45.546252 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.70s 2025-06-22 20:07:45.546262 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.55s 2025-06-22 20:07:45.546271 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.54s 2025-06-22 20:07:45.546281 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.37s 2025-06-22 20:07:45.546291 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.32s 2025-06-22 20:07:45.546300 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.23s 2025-06-22 20:07:45.546310 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.93s 2025-06-22 20:07:45.546320 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.81s 2025-06-22 20:07:45.546329 | orchestrator | barbican : Copying over existing policy file ---------------------------- 1.70s 2025-06-22 20:07:45.546339 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:45.546354 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:45.546370 | orchestrator | 2025-06-22 20:07:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:48.561261 | orchestrator | 2025-06-22 20:07:48 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:48.563189 | orchestrator | 2025-06-22 20:07:48 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:07:48.563247 | orchestrator | 2025-06-22 20:07:48 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:48.563266 | orchestrator | 2025-06-22 20:07:48 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 
20:07:48.563285 | orchestrator | 2025-06-22 20:07:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:51.596862 | orchestrator | 2025-06-22 20:07:51 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:51.597140 | orchestrator | 2025-06-22 20:07:51 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:07:51.603403 | orchestrator | 2025-06-22 20:07:51 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:51.603829 | orchestrator | 2025-06-22 20:07:51 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:51.603854 | orchestrator | 2025-06-22 20:07:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:54.643642 | orchestrator | 2025-06-22 20:07:54 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:54.646408 | orchestrator | 2025-06-22 20:07:54 | INFO  | Task e9cb4b10-5b7f-43f6-b372-21ddddda2a5d is in state STARTED 2025-06-22 20:07:54.649084 | orchestrator | 2025-06-22 20:07:54 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:07:54.651651 | orchestrator | 2025-06-22 20:07:54 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:54.653440 | orchestrator | 2025-06-22 20:07:54 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:54.653738 | orchestrator | 2025-06-22 20:07:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:57.685261 | orchestrator | 2025-06-22 20:07:57 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:07:57.685926 | orchestrator | 2025-06-22 20:07:57 | INFO  | Task e9cb4b10-5b7f-43f6-b372-21ddddda2a5d is in state STARTED 2025-06-22 20:07:57.687744 | orchestrator | 2025-06-22 20:07:57 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:07:57.688609 | orchestrator | 2025-06-22 20:07:57 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:07:57.689510 | orchestrator | 2025-06-22 20:07:57 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:07:57.689624 | orchestrator | 2025-06-22 20:07:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:00.729009 | orchestrator | 2025-06-22 20:08:00 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:00.729447 | orchestrator | 2025-06-22 20:08:00 | INFO  | Task e9cb4b10-5b7f-43f6-b372-21ddddda2a5d is in state STARTED 2025-06-22 20:08:00.730174 | orchestrator | 2025-06-22 20:08:00 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:00.731795 | orchestrator | 2025-06-22 20:08:00 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:00.732427 | orchestrator | 2025-06-22 20:08:00 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:00.732546 | orchestrator | 2025-06-22 20:08:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:03.756389 | orchestrator | 2025-06-22 20:08:03 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:03.757176 | orchestrator | 2025-06-22 20:08:03 | INFO  | Task e9cb4b10-5b7f-43f6-b372-21ddddda2a5d is in state STARTED 2025-06-22 20:08:03.757626 | orchestrator | 2025-06-22 20:08:03 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 
20:08:03.758390 | orchestrator | 2025-06-22 20:08:03 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:03.759521 | orchestrator | 2025-06-22 20:08:03 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:03.759757 | orchestrator | 2025-06-22 20:08:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:06.793504 | orchestrator | 2025-06-22 20:08:06 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:06.794161 | orchestrator | 2025-06-22 20:08:06 | INFO  | Task e9cb4b10-5b7f-43f6-b372-21ddddda2a5d is in state STARTED 2025-06-22 20:08:06.794748 | orchestrator | 2025-06-22 20:08:06 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:06.795422 | orchestrator | 2025-06-22 20:08:06 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:06.796223 | orchestrator | 2025-06-22 20:08:06 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:06.796366 | orchestrator | 2025-06-22 20:08:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:09.828332 | orchestrator | 2025-06-22 20:08:09 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:09.829232 | orchestrator | 2025-06-22 20:08:09 | INFO  | Task e9cb4b10-5b7f-43f6-b372-21ddddda2a5d is in state SUCCESS 2025-06-22 20:08:09.832215 | orchestrator | 2025-06-22 20:08:09 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:09.834415 | orchestrator | 2025-06-22 20:08:09 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:09.836171 | orchestrator | 2025-06-22 20:08:09 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:09.836205 | orchestrator | 2025-06-22 20:08:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:12.881276 | orchestrator | 2025-06-22 20:08:12 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:12.883333 | orchestrator | 2025-06-22 20:08:12 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:12.885476 | orchestrator | 2025-06-22 20:08:12 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:12.887508 | orchestrator | 2025-06-22 20:08:12 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:12.887712 | orchestrator | 2025-06-22 20:08:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:15.930107 | orchestrator | 2025-06-22 20:08:15 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:15.931307 | orchestrator | 2025-06-22 20:08:15 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:15.933143 | orchestrator | 2025-06-22 20:08:15 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:15.935304 | orchestrator | 2025-06-22 20:08:15 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:15.935330 | orchestrator | 2025-06-22 20:08:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:18.963214 | orchestrator | 2025-06-22 20:08:18 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:18.963406 | orchestrator | 2025-06-22 20:08:18 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 
20:08:18.964319 | orchestrator | 2025-06-22 20:08:18 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:18.964921 | orchestrator | 2025-06-22 20:08:18 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:18.965465 | orchestrator | 2025-06-22 20:08:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:22.006579 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:22.010014 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:22.011738 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:22.014224 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:22.014276 | orchestrator | 2025-06-22 20:08:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:25.062923 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:25.064993 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:25.067213 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:25.069195 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:25.069238 | orchestrator | 2025-06-22 20:08:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:28.114472 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:28.115274 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:28.116386 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:28.117582 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:28.117606 | orchestrator | 2025-06-22 20:08:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:31.162312 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:31.164720 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:31.166888 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:31.168342 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 20:08:31.168547 | orchestrator | 2025-06-22 20:08:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:34.216507 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:34.218811 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:34.221495 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:34.224132 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED 2025-06-22 
20:08:34.224184 | orchestrator | 2025-06-22 20:08:34 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:08:37.268999 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED
2025-06-22 20:08:37.271526 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED
2025-06-22 20:08:37.273740 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED
2025-06-22 20:08:37.275859 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED
2025-06-22 20:08:37.275945 | orchestrator | 2025-06-22 20:08:37 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:08:40.315642 | orchestrator | 2025-06-22 20:08:40 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED
2025-06-22 20:08:40.317668 | orchestrator | 2025-06-22 20:08:40 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED
2025-06-22 20:08:40.319852 | orchestrator | 2025-06-22 20:08:40 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED
2025-06-22 20:08:40.322199 | orchestrator | 2025-06-22 20:08:40 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state STARTED
2025-06-22 20:08:40.322242 | orchestrator | 2025-06-22 20:08:40 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:08:43.369223 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED
2025-06-22 20:08:43.371368 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED
2025-06-22 20:08:43.373136 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED
2025-06-22 20:08:43.376120 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED
2025-06-22 20:08:43.381826 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task 6902e14f-7826-4b65-9d88-c8a433247651 is in state SUCCESS
2025-06-22 20:08:43.383330 | orchestrator |
2025-06-22 20:08:43.383414 | orchestrator | None
2025-06-22 20:08:43.383426 | orchestrator |
2025-06-22 20:08:43.383437 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 20:08:43.383449 | orchestrator |
2025-06-22 20:08:43.383459 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 20:08:43.383469 | orchestrator | Sunday 22 June 2025 20:05:45 +0000 (0:00:00.331) 0:00:00.331 ***********
2025-06-22 20:08:43.383479 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:08:43.383490 | orchestrator | ok: [testbed-node-1]
2025-06-22 20:08:43.383560 | orchestrator | ok: [testbed-node-2]
2025-06-22 20:08:43.383570 | orchestrator |
2025-06-22 20:08:43.383639 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 20:08:43.383686 | orchestrator | Sunday 22 June 2025 20:05:45 +0000 (0:00:00.407) 0:00:00.739 ***********
2025-06-22 20:08:43.383699 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-06-22 20:08:43.383709 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-06-22 20:08:43.383719 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-06-22 20:08:43.383729 | orchestrator |
2025-06-22 20:08:43.383738 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-06-22 20:08:43.383748 | orchestrator |
2025-06-22 20:08:43.383757 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-22 20:08:43.383767 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:00.536) 0:00:01.276 ***********
2025-06-22 20:08:43.383777 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 20:08:43.383807 | orchestrator |
2025-06-22 20:08:43.383817 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-06-22 20:08:43.383827 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:00.614) 0:00:01.890 ***********
2025-06-22 20:08:43.383837 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-06-22 20:08:43.383846 | orchestrator |
2025-06-22 20:08:43.383856 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-06-22 20:08:43.383866 | orchestrator | Sunday 22 June 2025 20:05:50 +0000 (0:00:03.397) 0:00:05.287 ***********
2025-06-22 20:08:43.383876 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-06-22 20:08:43.383885 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-06-22 20:08:43.383895 | orchestrator |
2025-06-22 20:08:43.383905 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-06-22 20:08:43.383949 | orchestrator | Sunday 22 June 2025 20:05:57 +0000 (0:00:07.271) 0:00:12.559 ***********
2025-06-22 20:08:43.383960 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-06-22 20:08:43.383970 | orchestrator |
2025-06-22 20:08:43.383979 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-06-22 20:08:43.383989 | orchestrator | Sunday 22 June 2025 20:06:01 +0000 (0:00:03.700) 0:00:16.259 ***********
2025-06-22 20:08:43.383999 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-22 20:08:43.384008 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-06-22 20:08:43.384018 | orchestrator |
2025-06-22 20:08:43.384028 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-06-22 20:08:43.384069 | orchestrator | Sunday 22 June 2025 20:06:05 +0000 (0:00:04.167) 0:00:20.427 ***********
2025-06-22 20:08:43.384112 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-22 20:08:43.384122 | orchestrator |
2025-06-22 20:08:43.384132 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-06-22 20:08:43.384162 | orchestrator | Sunday 22 June 2025 20:06:08 +0000 (0:00:03.452) 0:00:23.880 ***********
2025-06-22 20:08:43.384172 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-06-22 20:08:43.384182 | orchestrator |
2025-06-22 20:08:43.384192 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2025-06-22 20:08:43.384202 | orchestrator | Sunday 22 June 2025 20:06:13 +0000 (0:00:04.097) 0:00:27.977 ***********
2025-06-22 20:08:43.384215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.384270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.384291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.384303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384398 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384418 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384496 | orchestrator | 2025-06-22 20:08:43.384506 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-22 20:08:43.384516 | orchestrator | Sunday 22 June 2025 20:06:16 +0000 (0:00:03.110) 0:00:31.088 *********** 2025-06-22 20:08:43.384526 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:43.384536 | orchestrator | 2025-06-22 20:08:43.384546 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-06-22 20:08:43.384555 | orchestrator | Sunday 22 June 2025 20:06:16 +0000 (0:00:00.134) 0:00:31.223 *********** 2025-06-22 20:08:43.384565 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:43.384574 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:43.384584 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:43.384594 | orchestrator | 2025-06-22 20:08:43.384603 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:08:43.384613 | orchestrator | Sunday 22 June 2025 20:06:16 +0000 (0:00:00.311) 0:00:31.535 *********** 2025-06-22 20:08:43.384623 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:08:43.384633 | orchestrator | 2025-06-22 20:08:43.384642 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-22 20:08:43.384652 | orchestrator | Sunday 22 June 2025 20:06:17 +0000 (0:00:00.829) 0:00:32.364 *********** 2025-06-22 20:08:43.384662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 
'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.384688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.384699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.384710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384731 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.384892 | orchestrator | 2025-06-22 20:08:43.384903 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-22 20:08:43.384913 | orchestrator | Sunday 22 June 2025 20:06:23 +0000 (0:00:05.991) 0:00:38.355 *********** 2025-06-22 20:08:43.384923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.384944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.384964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.384974 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.384984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.384995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385005 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:43.385020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.385031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.385428 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385477 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:43.385494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.385504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.385521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385574 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:43.385584 | orchestrator | 2025-06-22 20:08:43.385595 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-22 20:08:43.385604 | orchestrator | Sunday 22 June 2025 20:06:24 +0000 (0:00:01.332) 0:00:39.688 *********** 2025-06-22 20:08:43.385614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 
'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.385625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.385640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385690 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:43.385700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.385917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.385955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.385993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.386010 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:43.386107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.386119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.386130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.386179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.386192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.386203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.386220 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:43.386230 | orchestrator | 2025-06-22 20:08:43.386240 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-22 20:08:43.386250 | orchestrator | Sunday 22 June 2025 20:06:25 +0000 (0:00:01.029) 0:00:40.717 *********** 2025-06-22 20:08:43.386260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.386271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.386306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.386322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386532 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386545 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386563 | orchestrator | 2025-06-22 20:08:43.386574 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-22 20:08:43.386585 | orchestrator | Sunday 22 June 2025 20:06:32 +0000 (0:00:06.327) 0:00:47.045 *********** 2025-06-22 20:08:43.386596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.386609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.386621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.386642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386724 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386753 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.386882 | orchestrator | 2025-06-22 20:08:43.386893 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-22 20:08:43.386904 | orchestrator | Sunday 22 June 2025 20:06:53 +0000 (0:00:20.932) 0:01:07.978 *********** 2025-06-22 20:08:43.386914 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:08:43.386924 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:08:43.386933 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:08:43.386943 | orchestrator | 2025-06-22 20:08:43.386953 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-22 20:08:43.386962 | orchestrator | Sunday 22 June 2025 20:06:58 +0000 (0:00:05.811) 0:01:13.789 *********** 2025-06-22 20:08:43.386972 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 20:08:43.386982 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 20:08:43.386991 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 20:08:43.387001 | orchestrator | 2025-06-22 20:08:43.387010 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-22 20:08:43.387020 | orchestrator | Sunday 22 June 2025 20:07:01 
+0000 (0:00:02.999) 0:01:16.788 *********** 2025-06-22 20:08:43.387030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387174 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387319 | orchestrator | 2025-06-22 20:08:43.387329 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-22 20:08:43.387339 | orchestrator | Sunday 22 June 2025 20:07:04 +0000 (0:00:02.742) 0:01:19.530 *********** 2025-06-22 20:08:43.387349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387398 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 
20:08:43.387581 | orchestrator | 2025-06-22 20:08:43.387591 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:08:43.387601 | orchestrator | Sunday 22 June 2025 20:07:07 +0000 (0:00:02.872) 0:01:22.403 *********** 2025-06-22 20:08:43.387611 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:43.387621 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:43.387631 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:43.387640 | orchestrator | 2025-06-22 20:08:43.387650 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-22 20:08:43.387659 | orchestrator | Sunday 22 June 2025 20:07:07 +0000 (0:00:00.367) 0:01:22.771 *********** 2025-06-22 20:08:43.387667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.387689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387731 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:43.387739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.387760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387801 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:43.387810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:43.387818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:43.387831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:43.387871 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:43.387879 | orchestrator | 2025-06-22 20:08:43.387887 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-22 20:08:43.387895 | orchestrator | Sunday 22 June 2025 20:07:08 +0000 (0:00:00.652) 0:01:23.423 *********** 2025-06-22 20:08:43.387904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.387917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.387925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:43.387934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387981 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.387998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388010 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:43.388106 | orchestrator | 2025-06-22 20:08:43.388114 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:08:43.388122 | orchestrator | Sunday 22 June 2025 20:07:13 +0000 (0:00:05.131) 0:01:28.555 *********** 2025-06-22 20:08:43.388130 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:43.388138 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:43.388150 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:43.388158 | orchestrator | 2025-06-22 20:08:43.388166 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-22 20:08:43.388174 | orchestrator | Sunday 22 June 2025 20:07:13 +0000 (0:00:00.348) 0:01:28.903 *********** 2025-06-22 20:08:43.388182 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-22 20:08:43.388190 | orchestrator | 2025-06-22 20:08:43.388198 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-22 20:08:43.388206 | orchestrator | Sunday 22 June 2025 20:07:17 +0000 (0:00:03.118) 0:01:32.022 *********** 2025-06-22 20:08:43.388219 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:08:43.388227 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-22 20:08:43.388235 | orchestrator | 2025-06-22 20:08:43.388243 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-22 20:08:43.388251 | orchestrator | Sunday 22 June 2025 20:07:19 +0000 (0:00:02.326) 0:01:34.349 *********** 2025-06-22 20:08:43.388259 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:43.388267 | orchestrator | 2025-06-22 20:08:43.388275 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:08:43.388283 | orchestrator | Sunday 22 June 2025 20:07:34 +0000 (0:00:15.039) 0:01:49.388 *********** 2025-06-22 20:08:43.388291 | orchestrator | 2025-06-22 20:08:43.388299 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:08:43.388306 | orchestrator | Sunday 22 June 2025 20:07:34 +0000 (0:00:00.061) 0:01:49.450 *********** 2025-06-22 20:08:43.388314 | orchestrator | 2025-06-22 20:08:43.388322 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:08:43.388330 | orchestrator | Sunday 22 June 2025 20:07:34 +0000 (0:00:00.060) 0:01:49.510 *********** 2025-06-22 
20:08:43.388338 | orchestrator |
2025-06-22 20:08:43.388346 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-06-22 20:08:43.388354 | orchestrator | Sunday 22 June 2025 20:07:34 +0000 (0:00:00.062) 0:01:49.573 ***********
2025-06-22 20:08:43.388362 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:08:43.388370 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:08:43.388378 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:08:43.388386 | orchestrator |
2025-06-22 20:08:43.388394 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-06-22 20:08:43.388402 | orchestrator | Sunday 22 June 2025 20:07:45 +0000 (0:00:10.952) 0:02:00.526 ***********
2025-06-22 20:08:43.388410 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:08:43.388418 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:08:43.388426 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:08:43.388434 | orchestrator |
2025-06-22 20:08:43.388442 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-06-22 20:08:43.388450 | orchestrator | Sunday 22 June 2025 20:07:57 +0000 (0:00:11.751) 0:02:12.278 ***********
2025-06-22 20:08:43.388458 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:08:43.388466 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:08:43.388474 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:08:43.388482 | orchestrator |
2025-06-22 20:08:43.388490 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-06-22 20:08:43.388498 | orchestrator | Sunday 22 June 2025 20:08:06 +0000 (0:00:09.223) 0:02:21.501 ***********
2025-06-22 20:08:43.388506 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:08:43.388514 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:08:43.388522 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:08:43.388529 | orchestrator |
2025-06-22 20:08:43.388537 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-06-22 20:08:43.388545 | orchestrator | Sunday 22 June 2025 20:08:16 +0000 (0:00:10.286) 0:02:31.787 ***********
2025-06-22 20:08:43.388553 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:08:43.388561 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:08:43.388569 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:08:43.388577 | orchestrator |
2025-06-22 20:08:43.388585 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-06-22 20:08:43.388593 | orchestrator | Sunday 22 June 2025 20:08:22 +0000 (0:00:05.467) 0:02:37.254 ***********
2025-06-22 20:08:43.388601 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:08:43.388609 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:08:43.388617 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:08:43.388625 | orchestrator |
2025-06-22 20:08:43.388633 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-06-22 20:08:43.388645 | orchestrator | Sunday 22 June 2025 20:08:33 +0000 (0:00:11.383) 0:02:48.638 ***********
2025-06-22 20:08:43.388653 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:08:43.388661 | orchestrator |
2025-06-22 20:08:43.388669 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:08:43.388677 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-22 20:08:43.388686 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-22 20:08:43.388694 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-22 20:08:43.388702 | orchestrator |
2025-06-22 20:08:43.388710 | orchestrator |
2025-06-22 20:08:43.388721 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:08:43.388730 | orchestrator | Sunday 22 June 2025 20:08:40 +0000 (0:00:07.196) 0:02:55.834 ***********
2025-06-22 20:08:43.388738 | orchestrator | ===============================================================================
2025-06-22 20:08:43.388746 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.93s
2025-06-22 20:08:43.388753 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.04s
2025-06-22 20:08:43.388765 | orchestrator | designate : Restart designate-api container ---------------------------- 11.75s
2025-06-22 20:08:43.388773 | orchestrator | designate : Restart designate-worker container ------------------------- 11.38s
2025-06-22 20:08:43.388781 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.95s
2025-06-22 20:08:43.388789 | orchestrator | designate : Restart designate-producer container ----------------------- 10.29s
2025-06-22 20:08:43.388797 | orchestrator | designate : Restart designate-central container ------------------------- 9.22s
2025-06-22 20:08:43.388805 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.27s
2025-06-22 20:08:43.388813 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.20s
2025-06-22 20:08:43.388821 | orchestrator | designate : Copying over config.json files for services ----------------- 6.33s
2025-06-22 20:08:43.388829 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.99s
2025-06-22 20:08:43.388837 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.81s
2025-06-22 20:08:43.388845 | orchestrator | designate : Restart designate-mdns container ---------------------------- 5.47s
2025-06-22 20:08:43.388853 | orchestrator | designate : Check designate containers ---------------------------------- 5.13s
2025-06-22 20:08:43.388861 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.17s
2025-06-22 20:08:43.388868 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.10s
2025-06-22 20:08:43.388876 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.70s
2025-06-22 20:08:43.388884 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.45s
2025-06-22 20:08:43.388892 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.40s
2025-06-22 20:08:43.388900 | orchestrator | designate : Creating Designate databases -------------------------------- 3.12s
2025-06-22 20:08:43.388908 | orchestrator | 2025-06-22 20:08:43 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:08:46.423253 | orchestrator | 2025-06-22 20:08:46 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED
2025-06-22 20:08:46.423933 |
orchestrator | 2025-06-22 20:08:46 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:46.425664 | orchestrator | 2025-06-22 20:08:46 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:08:46.426647 | orchestrator | 2025-06-22 20:08:46 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:46.426734 | orchestrator | 2025-06-22 20:08:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:49.469461 | orchestrator | 2025-06-22 20:08:49 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state STARTED 2025-06-22 20:08:49.471530 | orchestrator | 2025-06-22 20:08:49 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:49.473573 | orchestrator | 2025-06-22 20:08:49 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:08:49.476773 | orchestrator | 2025-06-22 20:08:49 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:49.477880 | orchestrator | 2025-06-22 20:08:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:52.523172 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task f515ae4b-d5d0-40d8-8e4e-98be0cae1cb1 is in state SUCCESS 2025-06-22 20:08:52.523432 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task f060f05d-f437-4196-b6cb-aeda5e7aa7f6 is in state STARTED 2025-06-22 20:08:52.524247 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state STARTED 2025-06-22 20:08:52.524796 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:08:52.527272 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:52.527298 | orchestrator | 2025-06-22 20:08:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:55.555613 | orchestrator | 2025-06-22 20:08:55 | INFO  | Task f060f05d-f437-4196-b6cb-aeda5e7aa7f6 is in state STARTED 2025-06-22 20:08:55.555762 | orchestrator | 2025-06-22 20:08:55 | INFO  | Task cf19c4e6-1198-4eaa-9427-85f408c5cfca is in state SUCCESS 2025-06-22 20:08:55.556908 | orchestrator | 2025-06-22 20:08:55.556955 | orchestrator | 2025-06-22 20:08:55.556976 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-22 20:08:55.556996 | orchestrator | 2025-06-22 20:08:55.557015 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-22 20:08:55.557034 | orchestrator | Sunday 22 June 2025 20:05:45 +0000 (0:00:00.127) 0:00:00.127 *********** 2025-06-22 20:08:55.557248 | orchestrator | changed: [localhost] 2025-06-22 20:08:55.557270 | orchestrator | 2025-06-22 20:08:55.557290 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-22 20:08:55.557310 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:00.856) 0:00:00.984 *********** 2025-06-22 20:08:55.557327 | orchestrator | 2025-06-22 20:08:55.557338 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:08:55.557364 | orchestrator | changed: [localhost] 2025-06-22 20:08:55.557383 | orchestrator | 2025-06-22 20:08:55.557401 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-22 20:08:55.557420 | orchestrator | Sunday 22 June 2025 20:08:32 +0000 
(0:02:46.619) 0:02:47.604 ***********
2025-06-22 20:08:55.557441 | orchestrator | changed: [localhost]
2025-06-22 20:08:55.557460 | orchestrator |
2025-06-22 20:08:55.557475 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 20:08:55.557486 | orchestrator |
2025-06-22 20:08:55.557499 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 20:08:55.557518 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:15.835) 0:03:03.440 ***********
2025-06-22 20:08:55.557536 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:08:55.557555 | orchestrator | ok: [testbed-node-1]
2025-06-22 20:08:55.557573 | orchestrator | ok: [testbed-node-2]
2025-06-22 20:08:55.557592 | orchestrator |
2025-06-22 20:08:55.557612 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 20:08:55.557631 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:00.277) 0:03:03.717 ***********
2025-06-22 20:08:55.557677 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-06-22 20:08:55.557698 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-06-22 20:08:55.557718 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-06-22 20:08:55.557737 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-06-22 20:08:55.557755 | orchestrator |
2025-06-22 20:08:55.557774 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-06-22 20:08:55.557794 | orchestrator | skipping: no hosts matched
2025-06-22 20:08:55.557813 | orchestrator |
2025-06-22 20:08:55.557832 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:08:55.557853 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 20:08:55.557874 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 20:08:55.557896 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 20:08:55.557923 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-22 20:08:55.557941 | orchestrator |
2025-06-22 20:08:55.557954 | orchestrator |
2025-06-22 20:08:55.557966 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:08:55.557980 | orchestrator | Sunday 22 June 2025 20:08:49 +0000 (0:00:00.490) 0:03:04.207 ***********
2025-06-22 20:08:55.557992 | orchestrator | ===============================================================================
2025-06-22 20:08:55.558004 | orchestrator | Download ironic-agent initramfs --------------------------------------- 166.62s
2025-06-22 20:08:55.558132 | orchestrator | Download ironic-agent kernel ------------------------------------------- 15.84s
2025-06-22 20:08:55.558152 | orchestrator | Ensure the destination directory exists --------------------------------- 0.86s
2025-06-22 20:08:55.558173 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s
2025-06-22 20:08:55.558193 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2025-06-22 20:08:55.558213 | orchestrator |
2025-06-22 20:08:55.558225 | orchestrator | 2025-06-22
20:08:55.558236 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:08:55.558246 | orchestrator | 2025-06-22 20:08:55.558257 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:08:55.558268 | orchestrator | Sunday 22 June 2025 20:07:48 +0000 (0:00:00.302) 0:00:00.302 *********** 2025-06-22 20:08:55.558279 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:08:55.558290 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:08:55.558301 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:08:55.558312 | orchestrator | 2025-06-22 20:08:55.558323 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:08:55.558334 | orchestrator | Sunday 22 June 2025 20:07:48 +0000 (0:00:00.479) 0:00:00.782 *********** 2025-06-22 20:08:55.558345 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-22 20:08:55.558356 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-22 20:08:55.558367 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-22 20:08:55.558377 | orchestrator | 2025-06-22 20:08:55.558388 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-22 20:08:55.558399 | orchestrator | 2025-06-22 20:08:55.558410 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-22 20:08:55.558420 | orchestrator | Sunday 22 June 2025 20:07:49 +0000 (0:00:00.596) 0:00:01.379 *********** 2025-06-22 20:08:55.558431 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:08:55.558455 | orchestrator | 2025-06-22 20:08:55.558467 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-22 20:08:55.558500 | orchestrator | Sunday 22 June 2025 20:07:50 +0000 (0:00:00.774) 0:00:02.153 *********** 2025-06-22 20:08:55.558523 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-22 20:08:55.558539 | orchestrator | 2025-06-22 20:08:55.558551 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-22 20:08:55.558562 | orchestrator | Sunday 22 June 2025 20:07:53 +0000 (0:00:03.596) 0:00:05.751 *********** 2025-06-22 20:08:55.558572 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-22 20:08:55.558583 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-22 20:08:55.558594 | orchestrator | 2025-06-22 20:08:55.558612 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-22 20:08:55.558624 | orchestrator | Sunday 22 June 2025 20:08:00 +0000 (0:00:06.678) 0:00:12.429 *********** 2025-06-22 20:08:55.558635 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:08:55.558645 | orchestrator | 2025-06-22 20:08:55.558655 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-22 20:08:55.558664 | orchestrator | Sunday 22 June 2025 20:08:03 +0000 (0:00:03.531) 0:00:15.960 *********** 2025-06-22 20:08:55.558674 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:08:55.558683 | orchestrator | changed: 
[testbed-node-0] => (item=placement -> service) 2025-06-22 20:08:55.558693 | orchestrator | 2025-06-22 20:08:55.558702 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-22 20:08:55.558712 | orchestrator | Sunday 22 June 2025 20:08:07 +0000 (0:00:03.975) 0:00:19.936 *********** 2025-06-22 20:08:55.558721 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:08:55.558731 | orchestrator | 2025-06-22 20:08:55.558740 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-22 20:08:55.558750 | orchestrator | Sunday 22 June 2025 20:08:11 +0000 (0:00:03.217) 0:00:23.154 *********** 2025-06-22 20:08:55.558759 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-22 20:08:55.558769 | orchestrator | 2025-06-22 20:08:55.558778 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-22 20:08:55.558795 | orchestrator | Sunday 22 June 2025 20:08:15 +0000 (0:00:04.095) 0:00:27.249 *********** 2025-06-22 20:08:55.558810 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:55.558825 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:55.558842 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:55.558859 | orchestrator | 2025-06-22 20:08:55.558868 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-22 20:08:55.558886 | orchestrator | Sunday 22 June 2025 20:08:15 +0000 (0:00:00.268) 0:00:27.517 *********** 2025-06-22 20:08:55.558906 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.558926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.558954 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.558964 | orchestrator | 2025-06-22 20:08:55.558979 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-22 20:08:55.558989 | orchestrator | Sunday 22 June 2025 20:08:16 +0000 (0:00:00.740) 0:00:28.258 *********** 2025-06-22 20:08:55.558999 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:55.559008 | orchestrator | 2025-06-22 20:08:55.559018 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-22 20:08:55.559028 | orchestrator | Sunday 22 June 2025 20:08:16 +0000 (0:00:00.110) 0:00:28.368 *********** 2025-06-22 20:08:55.559038 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:55.559071 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:55.559081 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:55.559091 | orchestrator | 2025-06-22 20:08:55.559101 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-22 20:08:55.559111 | orchestrator | Sunday 22 June 2025 20:08:16 +0000 (0:00:00.377) 0:00:28.745 *********** 2025-06-22 20:08:55.559120 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:08:55.559130 | orchestrator | 2025-06-22 20:08:55.559140 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-22 20:08:55.559150 | orchestrator | Sunday 22 June 2025 20:08:17 +0000 (0:00:00.443) 0:00:29.189 *********** 2025-06-22 20:08:55.559160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559206 | orchestrator | 2025-06-22 20:08:55.559216 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-22 20:08:55.559225 | orchestrator | Sunday 22 June 2025 20:08:18 +0000 (0:00:01.553) 0:00:30.743 *********** 2025-06-22 20:08:55.559240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559251 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:55.559261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559271 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:55.559291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559308 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:55.559325 | orchestrator | 2025-06-22 20:08:55.559342 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-22 20:08:55.559356 | orchestrator | Sunday 22 June 2025 20:08:19 +0000 (0:00:00.601) 0:00:31.344 *********** 2025-06-22 20:08:55.559372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559383 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:55.559398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559409 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:55.559419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559436 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:55.559445 | orchestrator | 2025-06-22 20:08:55.559455 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-22 20:08:55.559465 | orchestrator | Sunday 22 June 2025 20:08:19 +0000 (0:00:00.615) 0:00:31.959 *********** 2025-06-22 20:08:55.559475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559508 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559519 | orchestrator | 2025-06-22 20:08:55.559528 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-22 20:08:55.559538 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:01.338) 0:00:33.298 *********** 2025-06-22 20:08:55.559548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559586 | orchestrator | 2025-06-22 20:08:55.559595 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-22 20:08:55.559605 | orchestrator | Sunday 22 June 2025 20:08:23 +0000 (0:00:02.551) 0:00:35.849 *********** 2025-06-22 20:08:55.559615 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:08:55.559624 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:08:55.559634 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:08:55.559644 | orchestrator | 2025-06-22 20:08:55.559659 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-22 20:08:55.559669 | orchestrator | Sunday 22 June 2025 20:08:25 +0000 (0:00:01.810) 0:00:37.660 *********** 2025-06-22 20:08:55.559679 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:55.559688 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:55.559698 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:55.559707 | orchestrator | 2025-06-22 20:08:55.559717 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-22 20:08:55.559727 | orchestrator | Sunday 22 June 2025 20:08:27 +0000 (0:00:01.506) 0:00:39.166 *********** 2025-06-22 20:08:55.559748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559763 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:55.559773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559784 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:55.559794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:55.559804 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:55.559814 | orchestrator | 2025-06-22 20:08:55.559824 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-22 20:08:55.559834 | orchestrator | Sunday 22 June 2025 20:08:27 +0000 (0:00:00.505) 0:00:39.672 *********** 2025-06-22 20:08:55.559849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559864 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:55.559891 | orchestrator | 2025-06-22 20:08:55.559901 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-22 20:08:55.559911 | orchestrator | Sunday 22 June 2025 20:08:29 +0000 (0:00:01.495) 0:00:41.167 *********** 2025-06-22 20:08:55.559924 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:55.559941 | orchestrator | 2025-06-22 20:08:55.559959 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-22 20:08:55.559976 | orchestrator | Sunday 22 June 2025 20:08:31 +0000 (0:00:02.406) 0:00:43.574 *********** 2025-06-22 20:08:55.559991 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:55.560006 | orchestrator | 2025-06-22 20:08:55.560022 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-22 20:08:55.560039 | orchestrator | Sunday 22 June 2025 20:08:33 +0000 (0:00:02.404) 0:00:45.979 *********** 2025-06-22 20:08:55.560073 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:55.560083 | orchestrator | 2025-06-22 20:08:55.560092 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-22 20:08:55.560102 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:14.135) 0:01:00.114 *********** 2025-06-22 20:08:55.560112 | orchestrator | 2025-06-22 20:08:55.560121 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-22 20:08:55.560131 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:00.060) 0:01:00.175 *********** 2025-06-22 20:08:55.560140 | orchestrator | 2025-06-22 20:08:55.560150 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-22 20:08:55.560160 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:00.060) 0:01:00.235 *********** 2025-06-22 20:08:55.560169 | orchestrator | 2025-06-22 20:08:55.560178 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-22 20:08:55.560188 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:00.061) 0:01:00.297 *********** 2025-06-22 20:08:55.560198 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:55.560207 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:55.560217 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:55.560226 | orchestrator | 2025-06-22 20:08:55.560236 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:08:55.560246 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:08:55.560256 | orchestrator | 
testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:08:55.560265 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:08:55.560283 | orchestrator | 2025-06-22 20:08:55.560293 | orchestrator | 2025-06-22 20:08:55.560302 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:08:55.560312 | orchestrator | Sunday 22 June 2025 20:08:53 +0000 (0:00:05.636) 0:01:05.934 *********** 2025-06-22 20:08:55.560329 | orchestrator | =============================================================================== 2025-06-22 20:08:55.560339 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.14s 2025-06-22 20:08:55.560348 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.68s 2025-06-22 20:08:55.560358 | orchestrator | placement : Restart placement-api container ----------------------------- 5.64s 2025-06-22 20:08:55.560368 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.10s 2025-06-22 20:08:55.560377 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.98s 2025-06-22 20:08:55.560387 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.60s 2025-06-22 20:08:55.560401 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.53s 2025-06-22 20:08:55.560411 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.22s 2025-06-22 20:08:55.560421 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.55s 2025-06-22 20:08:55.560431 | orchestrator | placement : Creating placement databases -------------------------------- 2.41s 2025-06-22 20:08:55.560440 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.40s 2025-06-22 20:08:55.560450 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.81s 2025-06-22 20:08:55.560459 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.55s 2025-06-22 20:08:55.560469 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.51s 2025-06-22 20:08:55.560478 | orchestrator | placement : Check placement containers ---------------------------------- 1.50s 2025-06-22 20:08:55.560488 | orchestrator | placement : Copying over config.json files for services ----------------- 1.34s 2025-06-22 20:08:55.560497 | orchestrator | placement : include_tasks ----------------------------------------------- 0.77s 2025-06-22 20:08:55.560507 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.74s 2025-06-22 20:08:55.560516 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.62s 2025-06-22 20:08:55.560526 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.60s 2025-06-22 20:08:55.560536 | orchestrator | 2025-06-22 20:08:55 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:08:55.560546 | orchestrator | 2025-06-22 20:08:55 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:08:55.560555 | orchestrator | 2025-06-22 20:08:55 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is 
in state STARTED 2025-06-22 20:08:55.560565 | orchestrator | 2025-06-22 20:08:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:58.575335 | orchestrator | 2025-06-22 20:08:58 | INFO  | Task f060f05d-f437-4196-b6cb-aeda5e7aa7f6 is in state SUCCESS 2025-06-22 20:08:58.575528 | orchestrator | 2025-06-22 20:08:58 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:08:58.575560 | orchestrator | 2025-06-22 20:08:58 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:08:58.575991 | orchestrator | 2025-06-22 20:08:58 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:08:58.576483 | orchestrator | 2025-06-22 20:08:58 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:08:58.576581 | orchestrator | 2025-06-22 20:08:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:01.606545 | orchestrator | 2025-06-22 20:09:01 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:01.608684 | orchestrator | 2025-06-22 20:09:01 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:01.609187 | orchestrator | 2025-06-22 20:09:01 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:01.610637 | orchestrator | 2025-06-22 20:09:01 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:01.610690 | orchestrator | 2025-06-22 20:09:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:04.646889 | orchestrator | 2025-06-22 20:09:04 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:04.649519 | orchestrator | 2025-06-22 20:09:04 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:04.652144 | orchestrator | 2025-06-22 20:09:04 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:04.654557 | orchestrator | 2025-06-22 20:09:04 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:04.654833 | orchestrator | 2025-06-22 20:09:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:07.694821 | orchestrator | 2025-06-22 20:09:07 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:07.701189 | orchestrator | 2025-06-22 20:09:07 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:07.702666 | orchestrator | 2025-06-22 20:09:07 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:07.705070 | orchestrator | 2025-06-22 20:09:07 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:07.705234 | orchestrator | 2025-06-22 20:09:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:10.753382 | orchestrator | 2025-06-22 20:09:10 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:10.753479 | orchestrator | 2025-06-22 20:09:10 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:10.753495 | orchestrator | 2025-06-22 20:09:10 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:10.754459 | orchestrator | 2025-06-22 20:09:10 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:10.755201 | orchestrator | 2025-06-22 20:09:10 | INFO  | Wait 1 second(s) until the next check 
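The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" entries above are the deploy wrapper polling its task IDs until every Kolla play has finished. A minimal sketch of that polling pattern (illustrative only; the function names and the state lookup are assumptions, not the actual osism code):

    import time

    def wait_for_tasks(get_state, task_ids, interval=1):
        """Poll each task until it leaves the STARTED/PENDING states."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)  # e.g. a lookup against the task backend
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

With get_state wired to the real task backend, this reproduces the cadence seen in the log: one status line per outstanding task, then a one-second pause before the next round.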
2025-06-22 20:09:13.788174 | orchestrator | 2025-06-22 20:09:13 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:13.788284 | orchestrator | 2025-06-22 20:09:13 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:13.788324 | orchestrator | 2025-06-22 20:09:13 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:13.789035 | orchestrator | 2025-06-22 20:09:13 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:13.789098 | orchestrator | 2025-06-22 20:09:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:16.820931 | orchestrator | 2025-06-22 20:09:16 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:16.823676 | orchestrator | 2025-06-22 20:09:16 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:16.825419 | orchestrator | 2025-06-22 20:09:16 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:16.826898 | orchestrator | 2025-06-22 20:09:16 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:16.826954 | orchestrator | 2025-06-22 20:09:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:19.864873 | orchestrator | 2025-06-22 20:09:19 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:19.866711 | orchestrator | 2025-06-22 20:09:19 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:19.867660 | orchestrator | 2025-06-22 20:09:19 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:19.869953 | orchestrator | 2025-06-22 20:09:19 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:19.870003 | orchestrator | 2025-06-22 20:09:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:22.901664 | orchestrator | 2025-06-22 20:09:22 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:22.901753 | orchestrator | 2025-06-22 20:09:22 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:22.903922 | orchestrator | 2025-06-22 20:09:22 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:22.903954 | orchestrator | 2025-06-22 20:09:22 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:22.903966 | orchestrator | 2025-06-22 20:09:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:25.926486 | orchestrator | 2025-06-22 20:09:25 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:25.926589 | orchestrator | 2025-06-22 20:09:25 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:25.927231 | orchestrator | 2025-06-22 20:09:25 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:25.930929 | orchestrator | 2025-06-22 20:09:25 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:25.930965 | orchestrator | 2025-06-22 20:09:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:28.988205 | orchestrator | 2025-06-22 20:09:28 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:28.988291 | orchestrator | 2025-06-22 20:09:28 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 
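The placement-api service definition earlier in this play configures its container healthcheck as healthcheck_curl http://192.168.16.10:8780 (and .11/.12 on the other nodes) with interval 30, retries 3 and timeout 30. healthcheck_curl itself is a helper shipped inside the Kolla images; a rough, hedged Python equivalent of what such a probe checks:

    import urllib.error
    import urllib.request

    def healthcheck_curl(url: str, timeout: int = 30) -> bool:
        """Return True if the endpoint answers with a non-5xx status."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except urllib.error.HTTPError as err:   # a 4xx reply still proves the API is up
            return err.code < 500
        except Exception:                        # connection refused, timeout, DNS, ...
            return False

    # Backend addresses taken from the service definitions above:
    for host in ("192.168.16.10", "192.168.16.11", "192.168.16.12"):
        print(host, healthcheck_curl(f"http://{host}:8780"))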
2025-06-22 20:09:28.988317 | orchestrator | 2025-06-22 20:09:28 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:28.988930 | orchestrator | 2025-06-22 20:09:28 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:28.990844 | orchestrator | 2025-06-22 20:09:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:32.011451 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:32.013377 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:32.016630 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:32.016799 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state STARTED 2025-06-22 20:09:32.016822 | orchestrator | 2025-06-22 20:09:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:35.052290 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:35.052340 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:35.052358 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:35.052363 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:35.052367 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task 345bb9af-8a60-48ac-9ecc-9023d981f6d8 is in state SUCCESS 2025-06-22 20:09:35.052371 | orchestrator | 2025-06-22 20:09:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:38.096498 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:38.097139 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:38.098494 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:38.099145 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:38.099320 | orchestrator | 2025-06-22 20:09:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:41.138254 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:41.138698 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:41.139512 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:41.140544 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:41.140575 | orchestrator | 2025-06-22 20:09:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:44.173352 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:44.176402 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:44.177386 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 
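The TASKS RECAP above also shows that the placement play registered the service and its endpoints in Keystone ("service-ks-register : placement | Creating services/endpoints"). A hedged sketch of verifying that registration afterwards with openstacksdk; it assumes admin credentials are available via clouds.yaml or OS_* environment variables and is not part of the deployment itself:

    import openstack

    conn = openstack.connect()  # credentials resolved from clouds.yaml / environment

    for service in conn.identity.services():
        if service.type == "placement":
            print("service:", service.name, service.id)
            for endpoint in conn.identity.endpoints(service_id=service.id):
                print("  ", endpoint.interface, endpoint.url)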
2025-06-22 20:09:44.180299 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:44.181278 | orchestrator | 2025-06-22 20:09:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:47.215382 | orchestrator | 2025-06-22 20:09:47 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:47.215549 | orchestrator | 2025-06-22 20:09:47 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:47.217392 | orchestrator | 2025-06-22 20:09:47 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:47.218165 | orchestrator | 2025-06-22 20:09:47 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:47.218405 | orchestrator | 2025-06-22 20:09:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:50.245525 | orchestrator | 2025-06-22 20:09:50 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:50.248366 | orchestrator | 2025-06-22 20:09:50 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:50.249684 | orchestrator | 2025-06-22 20:09:50 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:50.251270 | orchestrator | 2025-06-22 20:09:50 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:50.252202 | orchestrator | 2025-06-22 20:09:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:53.300585 | orchestrator | 2025-06-22 20:09:53 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:53.303311 | orchestrator | 2025-06-22 20:09:53 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:53.305216 | orchestrator | 2025-06-22 20:09:53 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:53.307782 | orchestrator | 2025-06-22 20:09:53 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:53.308175 | orchestrator | 2025-06-22 20:09:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:56.349088 | orchestrator | 2025-06-22 20:09:56 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:56.350818 | orchestrator | 2025-06-22 20:09:56 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:56.353567 | orchestrator | 2025-06-22 20:09:56 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:56.355261 | orchestrator | 2025-06-22 20:09:56 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:56.355555 | orchestrator | 2025-06-22 20:09:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:59.395855 | orchestrator | 2025-06-22 20:09:59 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:09:59.398547 | orchestrator | 2025-06-22 20:09:59 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:09:59.399992 | orchestrator | 2025-06-22 20:09:59 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:09:59.402249 | orchestrator | 2025-06-22 20:09:59 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:09:59.402274 | orchestrator | 2025-06-22 20:09:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:02.434524 
| orchestrator | 2025-06-22 20:10:02 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:02.436285 | orchestrator | 2025-06-22 20:10:02 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:02.437955 | orchestrator | 2025-06-22 20:10:02 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:02.439693 | orchestrator | 2025-06-22 20:10:02 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:10:02.439731 | orchestrator | 2025-06-22 20:10:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:05.477871 | orchestrator | 2025-06-22 20:10:05 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:05.478044 | orchestrator | 2025-06-22 20:10:05 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:05.478745 | orchestrator | 2025-06-22 20:10:05 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:05.479391 | orchestrator | 2025-06-22 20:10:05 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:10:05.479403 | orchestrator | 2025-06-22 20:10:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:08.511786 | orchestrator | 2025-06-22 20:10:08 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:08.512551 | orchestrator | 2025-06-22 20:10:08 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:08.513409 | orchestrator | 2025-06-22 20:10:08 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:08.514667 | orchestrator | 2025-06-22 20:10:08 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:10:08.514733 | orchestrator | 2025-06-22 20:10:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:11.539864 | orchestrator | 2025-06-22 20:10:11 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:11.540625 | orchestrator | 2025-06-22 20:10:11 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:11.541167 | orchestrator | 2025-06-22 20:10:11 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:11.541648 | orchestrator | 2025-06-22 20:10:11 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:10:11.541690 | orchestrator | 2025-06-22 20:10:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:14.569542 | orchestrator | 2025-06-22 20:10:14 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:14.570312 | orchestrator | 2025-06-22 20:10:14 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:14.572195 | orchestrator | 2025-06-22 20:10:14 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:14.574124 | orchestrator | 2025-06-22 20:10:14 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:10:14.574248 | orchestrator | 2025-06-22 20:10:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:17.603642 | orchestrator | 2025-06-22 20:10:17 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:17.605723 | orchestrator | 2025-06-22 20:10:17 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:17.606243 | 
orchestrator | 2025-06-22 20:10:17 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:17.607300 | orchestrator | 2025-06-22 20:10:17 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:10:17.607325 | orchestrator | 2025-06-22 20:10:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:20.635995 | orchestrator | 2025-06-22 20:10:20 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:20.637268 | orchestrator | 2025-06-22 20:10:20 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:20.638547 | orchestrator | 2025-06-22 20:10:20 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:20.639140 | orchestrator | 2025-06-22 20:10:20 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:10:20.639173 | orchestrator | 2025-06-22 20:10:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:23.677035 | orchestrator | 2025-06-22 20:10:23 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:23.678435 | orchestrator | 2025-06-22 20:10:23 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:23.679102 | orchestrator | 2025-06-22 20:10:23 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:23.679991 | orchestrator | 2025-06-22 20:10:23 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state STARTED 2025-06-22 20:10:23.680039 | orchestrator | 2025-06-22 20:10:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:26.717216 | orchestrator | 2025-06-22 20:10:26 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:26.718362 | orchestrator | 2025-06-22 20:10:26 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:26.719123 | orchestrator | 2025-06-22 20:10:26 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:26.720760 | orchestrator | 2025-06-22 20:10:26 | INFO  | Task 7a9114a4-3447-409f-b3d3-734086d76f19 is in state SUCCESS 2025-06-22 20:10:26.722217 | orchestrator | 2025-06-22 20:10:26.722247 | orchestrator | 2025-06-22 20:10:26.722254 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:10:26.722262 | orchestrator | 2025-06-22 20:10:26.722268 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:10:26.722276 | orchestrator | Sunday 22 June 2025 20:08:53 +0000 (0:00:00.157) 0:00:00.157 *********** 2025-06-22 20:10:26.722282 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:26.722290 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:26.722296 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:26.722303 | orchestrator | 2025-06-22 20:10:26.722309 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:10:26.722315 | orchestrator | Sunday 22 June 2025 20:08:53 +0000 (0:00:00.271) 0:00:00.429 *********** 2025-06-22 20:10:26.722322 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-22 20:10:26.722328 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-22 20:10:26.722334 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-22 20:10:26.722341 | orchestrator | 2025-06-22 20:10:26.722347 | 
orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-06-22 20:10:26.722353 | orchestrator | 2025-06-22 20:10:26.722359 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-06-22 20:10:26.722365 | orchestrator | Sunday 22 June 2025 20:08:54 +0000 (0:00:01.102) 0:00:01.531 *********** 2025-06-22 20:10:26.722372 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:26.722378 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:26.722384 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:26.722390 | orchestrator | 2025-06-22 20:10:26.722396 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:10:26.722403 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.722411 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.722430 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.722436 | orchestrator | 2025-06-22 20:10:26.722442 | orchestrator | 2025-06-22 20:10:26.722449 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:10:26.722455 | orchestrator | Sunday 22 June 2025 20:08:55 +0000 (0:00:00.836) 0:00:02.367 *********** 2025-06-22 20:10:26.722461 | orchestrator | =============================================================================== 2025-06-22 20:10:26.722467 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s 2025-06-22 20:10:26.722473 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.84s 2025-06-22 20:10:26.722480 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.27s 2025-06-22 20:10:26.722486 | orchestrator | 2025-06-22 20:10:26.722492 | orchestrator | 2025-06-22 20:10:26.722498 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:10:26.722504 | orchestrator | 2025-06-22 20:10:26.722510 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:10:26.722517 | orchestrator | Sunday 22 June 2025 20:08:59 +0000 (0:00:00.199) 0:00:00.199 *********** 2025-06-22 20:10:26.722523 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:26.722529 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:26.722535 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:26.722555 | orchestrator | ok: [testbed-manager] 2025-06-22 20:10:26.722561 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:10:26.722568 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:10:26.722574 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:10:26.722580 | orchestrator | 2025-06-22 20:10:26.722587 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:10:26.722593 | orchestrator | Sunday 22 June 2025 20:09:00 +0000 (0:00:00.611) 0:00:00.810 *********** 2025-06-22 20:10:26.722599 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-22 20:10:26.722605 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-22 20:10:26.722611 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-22 20:10:26.722618 | orchestrator | ok: 
[testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-22 20:10:26.722624 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-22 20:10:26.722630 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-22 20:10:26.722636 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-22 20:10:26.722642 | orchestrator | 2025-06-22 20:10:26.722648 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-22 20:10:26.722654 | orchestrator | 2025-06-22 20:10:26.722660 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-22 20:10:26.722666 | orchestrator | Sunday 22 June 2025 20:09:00 +0000 (0:00:00.590) 0:00:01.401 *********** 2025-06-22 20:10:26.722674 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:10:26.722681 | orchestrator | 2025-06-22 20:10:26.722687 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-22 20:10:26.722693 | orchestrator | Sunday 22 June 2025 20:09:02 +0000 (0:00:01.696) 0:00:03.097 *********** 2025-06-22 20:10:26.722699 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-22 20:10:26.722705 | orchestrator | 2025-06-22 20:10:26.722712 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-22 20:10:26.722718 | orchestrator | Sunday 22 June 2025 20:09:05 +0000 (0:00:03.301) 0:00:06.398 *********** 2025-06-22 20:10:26.722724 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-22 20:10:26.722740 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-22 20:10:26.722746 | orchestrator | 2025-06-22 20:10:26.722752 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-22 20:10:26.722758 | orchestrator | Sunday 22 June 2025 20:09:11 +0000 (0:00:05.763) 0:00:12.162 *********** 2025-06-22 20:10:26.722765 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:10:26.722771 | orchestrator | 2025-06-22 20:10:26.722778 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-22 20:10:26.722784 | orchestrator | Sunday 22 June 2025 20:09:14 +0000 (0:00:03.043) 0:00:15.205 *********** 2025-06-22 20:10:26.722850 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:10:26.722859 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-22 20:10:26.722866 | orchestrator | 2025-06-22 20:10:26.722873 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-22 20:10:26.722880 | orchestrator | Sunday 22 June 2025 20:09:17 +0000 (0:00:03.395) 0:00:18.601 *********** 2025-06-22 20:10:26.722887 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:10:26.722894 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-22 20:10:26.722901 | orchestrator | 2025-06-22 20:10:26.722952 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-22 20:10:26.722964 | orchestrator | Sunday 22 June 
2025 20:09:24 +0000 (0:00:06.912) 0:00:25.513 *********** 2025-06-22 20:10:26.722982 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-22 20:10:26.722989 | orchestrator | 2025-06-22 20:10:26.722998 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:10:26.723008 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.723248 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.723264 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.723275 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.723282 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.723288 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.723295 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:10:26.723302 | orchestrator | 2025-06-22 20:10:26.723313 | orchestrator | 2025-06-22 20:10:26.723320 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:10:26.723326 | orchestrator | Sunday 22 June 2025 20:09:31 +0000 (0:00:06.999) 0:00:32.513 *********** 2025-06-22 20:10:26.723332 | orchestrator | =============================================================================== 2025-06-22 20:10:26.723339 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 7.00s 2025-06-22 20:10:26.723345 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.91s 2025-06-22 20:10:26.723351 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.76s 2025-06-22 20:10:26.723357 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.40s 2025-06-22 20:10:26.723363 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.30s 2025-06-22 20:10:26.723369 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.04s 2025-06-22 20:10:26.723375 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.70s 2025-06-22 20:10:26.723381 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.61s 2025-06-22 20:10:26.723388 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2025-06-22 20:10:26.723394 | orchestrator | 2025-06-22 20:10:26.723634 | orchestrator | 2025-06-22 20:10:26.723644 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:10:26.723651 | orchestrator | 2025-06-22 20:10:26.723657 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:10:26.723663 | orchestrator | Sunday 22 June 2025 20:05:45 +0000 (0:00:00.261) 0:00:00.261 *********** 2025-06-22 20:10:26.723669 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:26.723676 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:26.723682 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:26.723688 | orchestrator 
| ok: [testbed-node-3] 2025-06-22 20:10:26.723694 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:10:26.723700 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:10:26.723706 | orchestrator | 2025-06-22 20:10:26.723713 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:10:26.723719 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:00.627) 0:00:00.889 *********** 2025-06-22 20:10:26.723725 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-22 20:10:26.723731 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-22 20:10:26.723738 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-22 20:10:26.723752 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-22 20:10:26.723758 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-22 20:10:26.723789 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-22 20:10:26.723796 | orchestrator | 2025-06-22 20:10:26.723802 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-22 20:10:26.723808 | orchestrator | 2025-06-22 20:10:26.723814 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:10:26.723821 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:00.790) 0:00:01.680 *********** 2025-06-22 20:10:26.723827 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:10:26.723833 | orchestrator | 2025-06-22 20:10:26.723839 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-22 20:10:26.723845 | orchestrator | Sunday 22 June 2025 20:05:47 +0000 (0:00:01.104) 0:00:02.785 *********** 2025-06-22 20:10:26.723852 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:26.723858 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:26.723864 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:26.723870 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:10:26.723876 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:10:26.723882 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:10:26.723888 | orchestrator | 2025-06-22 20:10:26.723894 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-22 20:10:26.723900 | orchestrator | Sunday 22 June 2025 20:05:49 +0000 (0:00:01.125) 0:00:03.910 *********** 2025-06-22 20:10:26.723906 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:26.723912 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:26.723918 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:26.723924 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:10:26.723930 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:10:26.723936 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:10:26.723942 | orchestrator | 2025-06-22 20:10:26.723948 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-22 20:10:26.723954 | orchestrator | Sunday 22 June 2025 20:05:50 +0000 (0:00:00.965) 0:00:04.876 *********** 2025-06-22 20:10:26.723961 | orchestrator | ok: [testbed-node-0] => { 2025-06-22 20:10:26.723967 | orchestrator |  "changed": false, 2025-06-22 20:10:26.723973 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:10:26.723979 | orchestrator | } 
2025-06-22 20:10:26.723986 | orchestrator | ok: [testbed-node-1] => { 2025-06-22 20:10:26.723996 | orchestrator |  "changed": false, 2025-06-22 20:10:26.724003 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:10:26.724009 | orchestrator | } 2025-06-22 20:10:26.724015 | orchestrator | ok: [testbed-node-2] => { 2025-06-22 20:10:26.724021 | orchestrator |  "changed": false, 2025-06-22 20:10:26.724027 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:10:26.724033 | orchestrator | } 2025-06-22 20:10:26.724039 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 20:10:26.724045 | orchestrator |  "changed": false, 2025-06-22 20:10:26.724099 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:10:26.724106 | orchestrator | } 2025-06-22 20:10:26.724112 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 20:10:26.724118 | orchestrator |  "changed": false, 2025-06-22 20:10:26.724124 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:10:26.724130 | orchestrator | } 2025-06-22 20:10:26.724137 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 20:10:26.724143 | orchestrator |  "changed": false, 2025-06-22 20:10:26.724149 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:10:26.724155 | orchestrator | } 2025-06-22 20:10:26.724161 | orchestrator | 2025-06-22 20:10:26.724167 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-22 20:10:26.724174 | orchestrator | Sunday 22 June 2025 20:05:50 +0000 (0:00:00.675) 0:00:05.551 *********** 2025-06-22 20:10:26.724186 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.724192 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.724198 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.724204 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.724210 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.724216 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.724222 | orchestrator | 2025-06-22 20:10:26.724229 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-22 20:10:26.724235 | orchestrator | Sunday 22 June 2025 20:05:51 +0000 (0:00:00.550) 0:00:06.101 *********** 2025-06-22 20:10:26.724241 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-22 20:10:26.724247 | orchestrator | 2025-06-22 20:10:26.724253 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-22 20:10:26.724260 | orchestrator | Sunday 22 June 2025 20:05:54 +0000 (0:00:03.667) 0:00:09.769 *********** 2025-06-22 20:10:26.724267 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-22 20:10:26.724275 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-22 20:10:26.724281 | orchestrator | 2025-06-22 20:10:26.724288 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-22 20:10:26.724295 | orchestrator | Sunday 22 June 2025 20:06:02 +0000 (0:00:07.209) 0:00:16.979 *********** 2025-06-22 20:10:26.724302 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:10:26.724309 | orchestrator | 2025-06-22 20:10:26.724316 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-22 20:10:26.724323 | orchestrator | Sunday 22 June 2025 20:06:05 +0000 
(0:00:03.666) 0:00:20.645 *********** 2025-06-22 20:10:26.724329 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:10:26.724336 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-22 20:10:26.724343 | orchestrator | 2025-06-22 20:10:26.724350 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-22 20:10:26.724356 | orchestrator | Sunday 22 June 2025 20:06:09 +0000 (0:00:04.150) 0:00:24.796 *********** 2025-06-22 20:10:26.724363 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:10:26.724370 | orchestrator | 2025-06-22 20:10:26.724377 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-22 20:10:26.724384 | orchestrator | Sunday 22 June 2025 20:06:13 +0000 (0:00:03.813) 0:00:28.609 *********** 2025-06-22 20:10:26.724391 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-22 20:10:26.724434 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-22 20:10:26.724443 | orchestrator | 2025-06-22 20:10:26.724450 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:10:26.724457 | orchestrator | Sunday 22 June 2025 20:06:22 +0000 (0:00:08.272) 0:00:36.881 *********** 2025-06-22 20:10:26.724464 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.724471 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.724478 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.724484 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.724491 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.724498 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.724504 | orchestrator | 2025-06-22 20:10:26.724511 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-22 20:10:26.724518 | orchestrator | Sunday 22 June 2025 20:06:22 +0000 (0:00:00.594) 0:00:37.476 *********** 2025-06-22 20:10:26.724525 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.724532 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.724538 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.724545 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.724552 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.724559 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.724570 | orchestrator | 2025-06-22 20:10:26.724577 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-22 20:10:26.724584 | orchestrator | Sunday 22 June 2025 20:06:24 +0000 (0:00:02.086) 0:00:39.563 *********** 2025-06-22 20:10:26.724590 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:26.724596 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:26.724602 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:26.724609 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:10:26.724615 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:10:26.724621 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:10:26.724627 | orchestrator | 2025-06-22 20:10:26.724633 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-22 20:10:26.724639 | orchestrator | Sunday 22 June 2025 20:06:25 +0000 (0:00:01.105) 0:00:40.668 *********** 2025-06-22 20:10:26.724645 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 20:10:26.724651 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.724657 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.724664 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.724674 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.724680 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.724686 | orchestrator | 2025-06-22 20:10:26.724693 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-22 20:10:26.724699 | orchestrator | Sunday 22 June 2025 20:06:28 +0000 (0:00:02.220) 0:00:42.889 *********** 2025-06-22 20:10:26.724708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.724717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.724742 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.724756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.724765 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.724773 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.724779 | orchestrator | 2025-06-22 20:10:26.724785 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-22 20:10:26.724792 | orchestrator | Sunday 22 June 2025 20:06:30 +0000 (0:00:02.668) 0:00:45.557 *********** 2025-06-22 20:10:26.724798 | orchestrator | [WARNING]: Skipped 2025-06-22 20:10:26.724805 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-22 20:10:26.724811 | orchestrator | due to this access issue: 2025-06-22 20:10:26.724817 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-22 20:10:26.724823 | orchestrator | a directory 2025-06-22 20:10:26.724829 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:10:26.724836 | orchestrator | 2025-06-22 20:10:26.724842 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:10:26.724848 | orchestrator | Sunday 22 June 2025 20:06:31 +0000 (0:00:00.753) 0:00:46.311 *********** 2025-06-22 20:10:26.724854 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:10:26.724861 | orchestrator | 
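Aside: the service-ks-register tasks earlier in this play register neutron in Keystone — a "neutron" service of type network, internal and public endpoints on port 9696, a "neutron" user in the "service" project, and grants of the "admin" and "service" roles. A minimal standalone sketch of equivalent registrations with the openstack.cloud collection follows; it is not the role's actual implementation, the "cloud: testbed" entry and the password variable are assumptions, and module/parameter names should be verified against the collection documentation.

    # Sketch only: illustrative equivalent of the service-ks-register tasks above.
    # "cloud: testbed" and neutron_keystone_password are assumed names.
    - name: Register neutron in Keystone (illustrative)
      hosts: localhost
      gather_facts: false
      tasks:
        - name: Create the network service
          openstack.cloud.catalog_service:
            cloud: testbed
            name: neutron
            service_type: network
            state: present

        - name: Create internal and public endpoints (URLs taken from the log)
          openstack.cloud.endpoint:
            cloud: testbed
            service: neutron
            endpoint_interface: "{{ item.interface }}"
            url: "{{ item.url }}"
            state: present
          loop:
            - { interface: internal, url: "https://api-int.testbed.osism.xyz:9696" }
            - { interface: public, url: "https://api.testbed.osism.xyz:9696" }

        - name: Create the neutron service user in the service project
          openstack.cloud.identity_user:
            cloud: testbed
            name: neutron
            password: "{{ neutron_keystone_password }}"  # assumed variable name
            default_project: service
            state: present

        - name: Grant the admin and service roles to the neutron user
          openstack.cloud.role_assignment:
            cloud: testbed
            user: neutron
            project: service
            role: "{{ item }}"
          loop:
            - admin
            - service

Run against an admin-scoped cloud entry, such a playbook would converge toward the same catalog state the tasks above report as changed.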
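Aside: the per-item output printed for "Ensuring config directories exist" (and for the copy tasks that follow) shows the shape of the service map the role iterates over — each entry carries a container name, image, volume list, healthcheck and, for neutron-server, haproxy settings — and the items have the {'key': ..., 'value': ...} shape that dict2items produces. A minimal sketch of that loop pattern, using an illustrative example_services map and target path rather than the role's real variables:

    # Sketch only: mirrors the {'key': ..., 'value': ...} loop items printed above;
    # example_services and the target path are illustrative, not the role's variables.
    - name: Ensure per-service config directories exist (illustrative)
      hosts: all
      gather_facts: false
      vars:
        example_services:
          neutron-server:
            container_name: neutron_server
            image: registry.osism.tech/kolla/neutron-server:2024.2
            enabled: true
          neutron-ovn-metadata-agent:
            container_name: neutron_ovn_metadata_agent
            image: registry.osism.tech/kolla/neutron-metadata-agent:2024.2
            enabled: true
      tasks:
        - name: Ensuring config directories exist
          ansible.builtin.file:
            path: "/etc/kolla/{{ item.key }}"   # assumed layout
            state: directory
            mode: "0770"
          become: true
          # dict2items yields the same item shape the log prints for each service
          loop: "{{ example_services | dict2items }}"
          when: item.value.enabled | bool

The same loop-plus-condition structure is consistent with what the log shows: the control nodes (testbed-node-0/1/2) act on the neutron-server items while testbed-node-3/4/5 act on the metadata-agent items, presumably filtered through the enabled/host_in_groups flags visible in each item.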
2025-06-22 20:10:26.724867 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-22 20:10:26.724873 | orchestrator | Sunday 22 June 2025 20:06:32 +0000 (0:00:01.142) 0:00:47.453 *********** 2025-06-22 20:10:26.724897 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.724909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.724929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.724940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.724952 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.724982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.724990 | orchestrator | 2025-06-22 20:10:26.724996 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-22 20:10:26.725002 | orchestrator | Sunday 22 June 2025 20:06:36 +0000 (0:00:03.821) 0:00:51.275 *********** 2025-06-22 20:10:26.725009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725015 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.725025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725032 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.725038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725088 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725095 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.725120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725128 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725137 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725148 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.725156 | orchestrator | 2025-06-22 20:10:26.725163 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-22 20:10:26.725172 | orchestrator | Sunday 22 June 2025 20:06:39 +0000 (0:00:03.166) 0:00:54.441 *********** 2025-06-22 20:10:26.725179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725186 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.725192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725203 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725220 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.725226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725232 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.725242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725249 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725266 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.725283 | orchestrator | 2025-06-22 20:10:26.725290 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-22 20:10:26.725296 | orchestrator | Sunday 22 June 2025 20:06:43 +0000 (0:00:03.447) 0:00:57.889 *********** 2025-06-22 20:10:26.725302 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.725308 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.725314 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725320 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.725326 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725332 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.725338 | 
orchestrator | 2025-06-22 20:10:26.725344 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-22 20:10:26.725350 | orchestrator | Sunday 22 June 2025 20:06:46 +0000 (0:00:03.047) 0:01:00.936 *********** 2025-06-22 20:10:26.725356 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.725362 | orchestrator | 2025-06-22 20:10:26.725368 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-22 20:10:26.725374 | orchestrator | Sunday 22 June 2025 20:06:46 +0000 (0:00:00.091) 0:01:01.028 *********** 2025-06-22 20:10:26.725381 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.725387 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.725393 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.725399 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725405 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725411 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.725417 | orchestrator | 2025-06-22 20:10:26.725423 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-22 20:10:26.725429 | orchestrator | Sunday 22 June 2025 20:06:47 +0000 (0:00:00.902) 0:01:01.931 *********** 2025-06-22 20:10:26.725442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725449 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725462 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.725471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725482 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.725488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.725495 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.725501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725507 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.725520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725531 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725538 | orchestrator | 2025-06-22 20:10:26.725544 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-22 20:10:26.725550 | orchestrator | 
Sunday 22 June 2025 20:06:50 +0000 (0:00:02.994) 0:01:04.925 *********** 2025-06-22 20:10:26.725560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.725591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725598 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.725607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.725619 | orchestrator | 2025-06-22 20:10:26.725625 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-22 20:10:26.725631 | orchestrator | Sunday 22 June 2025 20:06:53 +0000 (0:00:03.391) 0:01:08.317 *********** 2025-06-22 20:10:26.725638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725657 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.725663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725679 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.725686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.725692 | orchestrator | 2025-06-22 20:10:26.725699 | 
orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-22 20:10:26.725705 | orchestrator | Sunday 22 June 2025 20:06:59 +0000 (0:00:06.048) 0:01:14.366 *********** 2025-06-22 20:10:26.725711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725718 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725736 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725753 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.725762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725775 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725782 | orchestrator | 2025-06-22 20:10:26.725788 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-22 20:10:26.725794 | orchestrator | Sunday 22 June 2025 20:07:02 +0000 (0:00:03.121) 0:01:17.487 *********** 2025-06-22 20:10:26.725801 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725807 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725813 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.725819 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:26.725829 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:26.725836 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:26.725842 | orchestrator | 2025-06-22 20:10:26.725848 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-22 20:10:26.725854 | orchestrator | Sunday 22 June 2025 20:07:05 +0000 (0:00:02.575) 0:01:20.063 *********** 2025-06-22 20:10:26.725861 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725871 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725880 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725887 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.725893 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.725901 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': 
'30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.725941 | orchestrator | 2025-06-22 20:10:26.725947 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-22 20:10:26.725953 | orchestrator | Sunday 22 June 2025 20:07:08 +0000 (0:00:03.350) 0:01:23.413 *********** 2025-06-22 20:10:26.725959 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.725965 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.725972 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.725978 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.725984 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.725993 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726000 | orchestrator | 2025-06-22 20:10:26.726006 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-22 20:10:26.726012 | orchestrator | Sunday 22 June 2025 20:07:10 +0000 (0:00:01.958) 0:01:25.372 *********** 2025-06-22 20:10:26.726044 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726063 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726069 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726075 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726081 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726087 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726093 | orchestrator | 2025-06-22 20:10:26.726099 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-22 20:10:26.726106 | orchestrator | Sunday 22 June 2025 20:07:12 +0000 (0:00:02.077) 0:01:27.450 *********** 2025-06-22 20:10:26.726112 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726118 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726124 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726130 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726136 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726142 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726148 | orchestrator | 2025-06-22 20:10:26.726155 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-22 20:10:26.726161 | 
orchestrator | Sunday 22 June 2025 20:07:14 +0000 (0:00:02.001) 0:01:29.452 *********** 2025-06-22 20:10:26.726167 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726173 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726179 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726185 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726191 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726197 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726203 | orchestrator | 2025-06-22 20:10:26.726209 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-22 20:10:26.726215 | orchestrator | Sunday 22 June 2025 20:07:17 +0000 (0:00:02.977) 0:01:32.430 *********** 2025-06-22 20:10:26.726221 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726228 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726238 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726244 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726250 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726256 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726262 | orchestrator | 2025-06-22 20:10:26.726269 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-22 20:10:26.726275 | orchestrator | Sunday 22 June 2025 20:07:20 +0000 (0:00:02.847) 0:01:35.278 *********** 2025-06-22 20:10:26.726281 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726287 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726293 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726299 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726305 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726311 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726317 | orchestrator | 2025-06-22 20:10:26.726323 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-22 20:10:26.726329 | orchestrator | Sunday 22 June 2025 20:07:22 +0000 (0:00:01.957) 0:01:37.235 *********** 2025-06-22 20:10:26.726336 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:10:26.726342 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726348 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:10:26.726354 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726361 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:10:26.726367 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726377 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:10:26.726386 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726398 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:10:26.726404 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726410 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:10:26.726416 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726422 | orchestrator | 2025-06-22 20:10:26.726428 | orchestrator | TASK [neutron : Copying over 
l3_agent.ini] ************************************* 2025-06-22 20:10:26.726435 | orchestrator | Sunday 22 June 2025 20:07:24 +0000 (0:00:01.838) 0:01:39.073 *********** 2025-06-22 20:10:26.726441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.726448 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.726470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.726476 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726483 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.726502 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.726515 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.726532 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726538 | orchestrator | 2025-06-22 20:10:26.726544 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-22 20:10:26.726555 | orchestrator | Sunday 22 June 2025 20:07:26 +0000 (0:00:02.030) 0:01:41.104 *********** 2025-06-22 20:10:26.726561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.726567 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.726580 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.726603 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.726618 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.726639 | 
orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.726659 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726668 | orchestrator | 2025-06-22 20:10:26.726678 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-22 20:10:26.726688 | orchestrator | Sunday 22 June 2025 20:07:28 +0000 (0:00:02.145) 0:01:43.250 *********** 2025-06-22 20:10:26.726698 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726707 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726714 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726720 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726726 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726732 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726738 | orchestrator | 2025-06-22 20:10:26.726744 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-22 20:10:26.726750 | orchestrator | Sunday 22 June 2025 20:07:31 +0000 (0:00:02.913) 0:01:46.163 *********** 2025-06-22 20:10:26.726756 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726762 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726768 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726774 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:10:26.726780 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:10:26.726786 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:10:26.726792 | orchestrator | 2025-06-22 20:10:26.726798 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-22 20:10:26.726805 | orchestrator | Sunday 22 June 2025 20:07:34 +0000 (0:00:03.012) 0:01:49.175 *********** 2025-06-22 20:10:26.726811 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726817 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726823 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726829 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726835 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726841 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726847 | orchestrator | 2025-06-22 20:10:26.726853 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-22 20:10:26.726863 | orchestrator | Sunday 22 June 2025 20:07:38 +0000 (0:00:04.541) 0:01:53.716 *********** 2025-06-22 20:10:26.726869 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726875 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726882 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726888 | 
orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726894 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726905 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726911 | orchestrator | 2025-06-22 20:10:26.726917 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-22 20:10:26.726923 | orchestrator | Sunday 22 June 2025 20:07:41 +0000 (0:00:02.132) 0:01:55.848 *********** 2025-06-22 20:10:26.726930 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726936 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.726942 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.726948 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726954 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.726960 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.726966 | orchestrator | 2025-06-22 20:10:26.726972 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-22 20:10:26.726978 | orchestrator | Sunday 22 June 2025 20:07:43 +0000 (0:00:02.093) 0:01:57.942 *********** 2025-06-22 20:10:26.726985 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.726991 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.726997 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.727003 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.727009 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.727015 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.727024 | orchestrator | 2025-06-22 20:10:26.727034 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-22 20:10:26.727040 | orchestrator | Sunday 22 June 2025 20:07:45 +0000 (0:00:02.133) 0:02:00.075 *********** 2025-06-22 20:10:26.727070 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.727081 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.727091 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.727101 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.727111 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.727121 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.727130 | orchestrator | 2025-06-22 20:10:26.727145 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-22 20:10:26.727151 | orchestrator | Sunday 22 June 2025 20:07:48 +0000 (0:00:02.860) 0:02:02.936 *********** 2025-06-22 20:10:26.727157 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.727163 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.727169 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.727176 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.727182 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.727188 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.727194 | orchestrator | 2025-06-22 20:10:26.727200 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-22 20:10:26.727206 | orchestrator | Sunday 22 June 2025 20:07:50 +0000 (0:00:02.229) 0:02:05.165 *********** 2025-06-22 20:10:26.727212 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.727218 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.727224 | orchestrator | skipping: [testbed-node-0] 2025-06-22 
20:10:26.727230 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.727236 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.727242 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.727249 | orchestrator | 2025-06-22 20:10:26.727255 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-22 20:10:26.727261 | orchestrator | Sunday 22 June 2025 20:07:52 +0000 (0:00:02.058) 0:02:07.224 *********** 2025-06-22 20:10:26.727267 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.727273 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.727279 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.727285 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.727291 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.727297 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.727303 | orchestrator | 2025-06-22 20:10:26.727309 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-22 20:10:26.727325 | orchestrator | Sunday 22 June 2025 20:07:54 +0000 (0:00:02.213) 0:02:09.438 *********** 2025-06-22 20:10:26.727331 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:10:26.727338 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.727344 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:10:26.727350 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.727356 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:10:26.727362 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.727368 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:10:26.727375 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.727381 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:10:26.727387 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.727393 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:10:26.727399 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.727405 | orchestrator | 2025-06-22 20:10:26.727415 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-22 20:10:26.727426 | orchestrator | Sunday 22 June 2025 20:07:57 +0000 (0:00:02.489) 0:02:11.928 *********** 2025-06-22 20:10:26.727439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.727446 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.727455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.727462 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.727468 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.727479 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.727486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.727493 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.727500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': 
{'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:10:26.727507 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:26.727517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:10:26.727524 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.727531 | orchestrator | 2025-06-22 20:10:26.727538 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-22 20:10:26.727544 | orchestrator | Sunday 22 June 2025 20:08:00 +0000 (0:00:03.651) 0:02:15.580 *********** 2025-06-22 20:10:26.727555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.727568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.727575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': 
True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:10:26.727586 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.727594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.727605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:10:26.727616 | orchestrator | 2025-06-22 20:10:26.727623 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:10:26.727630 | orchestrator | Sunday 22 June 2025 20:08:04 +0000 (0:00:03.851) 0:02:19.431 *********** 2025-06-22 20:10:26.727636 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:26.727643 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:26.727650 | orchestrator | skipping: [testbed-node-2] 2025-06-22 
20:10:26.727656 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:10:26.727663 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:10:26.727669 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:10:26.727675 | orchestrator | 2025-06-22 20:10:26.727682 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-22 20:10:26.727689 | orchestrator | Sunday 22 June 2025 20:08:05 +0000 (0:00:00.491) 0:02:19.923 *********** 2025-06-22 20:10:26.727695 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:26.727702 | orchestrator | 2025-06-22 20:10:26.727708 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-22 20:10:26.727715 | orchestrator | Sunday 22 June 2025 20:08:07 +0000 (0:00:02.149) 0:02:22.072 *********** 2025-06-22 20:10:26.727721 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:26.727728 | orchestrator | 2025-06-22 20:10:26.727735 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-22 20:10:26.727741 | orchestrator | Sunday 22 June 2025 20:08:09 +0000 (0:00:02.325) 0:02:24.398 *********** 2025-06-22 20:10:26.727748 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:26.727754 | orchestrator | 2025-06-22 20:10:26.727761 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:10:26.727768 | orchestrator | Sunday 22 June 2025 20:08:51 +0000 (0:00:42.274) 0:03:06.672 *********** 2025-06-22 20:10:26.727774 | orchestrator | 2025-06-22 20:10:26.727781 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:10:26.727787 | orchestrator | Sunday 22 June 2025 20:08:51 +0000 (0:00:00.059) 0:03:06.732 *********** 2025-06-22 20:10:26.727794 | orchestrator | 2025-06-22 20:10:26.727800 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:10:26.727807 | orchestrator | Sunday 22 June 2025 20:08:52 +0000 (0:00:00.176) 0:03:06.909 *********** 2025-06-22 20:10:26.727834 | orchestrator | 2025-06-22 20:10:26.727841 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:10:26.727848 | orchestrator | Sunday 22 June 2025 20:08:52 +0000 (0:00:00.059) 0:03:06.968 *********** 2025-06-22 20:10:26.727854 | orchestrator | 2025-06-22 20:10:26.727861 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:10:26.727868 | orchestrator | Sunday 22 June 2025 20:08:52 +0000 (0:00:00.060) 0:03:07.029 *********** 2025-06-22 20:10:26.727874 | orchestrator | 2025-06-22 20:10:26.727881 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:10:26.727887 | orchestrator | Sunday 22 June 2025 20:08:52 +0000 (0:00:00.059) 0:03:07.089 *********** 2025-06-22 20:10:26.727894 | orchestrator | 2025-06-22 20:10:26.727900 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-22 20:10:26.727907 | orchestrator | Sunday 22 June 2025 20:08:52 +0000 (0:00:00.061) 0:03:07.150 *********** 2025-06-22 20:10:26.727913 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:26.727922 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:26.727934 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:26.727946 | orchestrator | 2025-06-22 20:10:26.727957 | 
orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-22 20:10:26.727974 | orchestrator | Sunday 22 June 2025 20:09:21 +0000 (0:00:29.300) 0:03:36.451 *********** 2025-06-22 20:10:26.727986 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:10:26.727998 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:10:26.728010 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:10:26.728030 | orchestrator | 2025-06-22 20:10:26.728042 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:10:26.728075 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 20:10:26.728089 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-22 20:10:26.728100 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-22 20:10:26.728112 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-22 20:10:26.728124 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-22 20:10:26.728136 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-22 20:10:26.728147 | orchestrator | 2025-06-22 20:10:26.728157 | orchestrator | 2025-06-22 20:10:26.728164 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:10:26.728170 | orchestrator | Sunday 22 June 2025 20:10:24 +0000 (0:01:03.037) 0:04:39.488 *********** 2025-06-22 20:10:26.728177 | orchestrator | =============================================================================== 2025-06-22 20:10:26.728189 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 63.04s 2025-06-22 20:10:26.728195 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 42.27s 2025-06-22 20:10:26.728202 | orchestrator | neutron : Restart neutron-server container ----------------------------- 29.30s 2025-06-22 20:10:26.728208 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.27s 2025-06-22 20:10:26.728215 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.21s 2025-06-22 20:10:26.728221 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.05s 2025-06-22 20:10:26.728228 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 4.54s 2025-06-22 20:10:26.728235 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.15s 2025-06-22 20:10:26.728241 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.85s 2025-06-22 20:10:26.728248 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.82s 2025-06-22 20:10:26.728254 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.81s 2025-06-22 20:10:26.728261 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.67s 2025-06-22 20:10:26.728267 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.67s 2025-06-22 20:10:26.728274 | orchestrator | neutron : Copying over neutron_taas.conf 
-------------------------------- 3.65s 2025-06-22 20:10:26.728281 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.45s 2025-06-22 20:10:26.728287 | orchestrator | neutron : Copying over config.json files for services ------------------- 3.39s 2025-06-22 20:10:26.728294 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.35s 2025-06-22 20:10:26.728300 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.17s 2025-06-22 20:10:26.728307 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.12s 2025-06-22 20:10:26.728313 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.05s 2025-06-22 20:10:26.728320 | orchestrator | 2025-06-22 20:10:26 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:26.728327 | orchestrator | 2025-06-22 20:10:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:29.747957 | orchestrator | 2025-06-22 20:10:29 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:29.748101 | orchestrator | 2025-06-22 20:10:29 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:29.748663 | orchestrator | 2025-06-22 20:10:29 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:29.751925 | orchestrator | 2025-06-22 20:10:29 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:29.751960 | orchestrator | 2025-06-22 20:10:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:32.780935 | orchestrator | 2025-06-22 20:10:32 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:32.781333 | orchestrator | 2025-06-22 20:10:32 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:32.781792 | orchestrator | 2025-06-22 20:10:32 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:32.782381 | orchestrator | 2025-06-22 20:10:32 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:32.782403 | orchestrator | 2025-06-22 20:10:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:35.805589 | orchestrator | 2025-06-22 20:10:35 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:35.805678 | orchestrator | 2025-06-22 20:10:35 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:35.806882 | orchestrator | 2025-06-22 20:10:35 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:35.808727 | orchestrator | 2025-06-22 20:10:35 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:35.808773 | orchestrator | 2025-06-22 20:10:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:38.862415 | orchestrator | 2025-06-22 20:10:38 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:38.862518 | orchestrator | 2025-06-22 20:10:38 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:38.863074 | orchestrator | 2025-06-22 20:10:38 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:38.864791 | orchestrator | 2025-06-22 20:10:38 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 
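[Editor's note] The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above and below are the deployment wrapper polling the state of the background tasks it triggered (here the neutron and magnum plays) and only proceeding once each task reports SUCCESS. The following is a minimal illustrative sketch of such a poll loop, not the actual OSISM tooling; get_task_state() and wait_for_tasks() are hypothetical stand-ins.

import time

# Hypothetical helper: in the real deployment this state comes from the
# task backend; here it is only a placeholder for illustration.
def get_task_state(task_id: str) -> str:
    raise NotImplementedError("replace with a real lookup against the task API")

def wait_for_tasks(task_ids, interval=1.0):
    # Poll the given task IDs until none remains unfinished, mirroring the
    # "is in state STARTED ... Wait 1 second(s)" lines in the log.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

[End of editor's note; the console log continues below.]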
2025-06-22 20:10:38.864886 | orchestrator | 2025-06-22 20:10:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:41.888160 | orchestrator | 2025-06-22 20:10:41 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:41.888245 | orchestrator | 2025-06-22 20:10:41 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:41.888772 | orchestrator | 2025-06-22 20:10:41 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state STARTED 2025-06-22 20:10:41.889450 | orchestrator | 2025-06-22 20:10:41 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:41.889527 | orchestrator | 2025-06-22 20:10:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:44.933732 | orchestrator | 2025-06-22 20:10:44 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:44.934182 | orchestrator | 2025-06-22 20:10:44 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:44.935689 | orchestrator | 2025-06-22 20:10:44 | INFO  | Task 8df50151-c6b3-40ce-9123-d34820383459 is in state SUCCESS 2025-06-22 20:10:44.936981 | orchestrator | 2025-06-22 20:10:44.937043 | orchestrator | 2025-06-22 20:10:44.937307 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:10:44.937371 | orchestrator | 2025-06-22 20:10:44.937387 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:10:44.937399 | orchestrator | Sunday 22 June 2025 20:08:44 +0000 (0:00:00.247) 0:00:00.247 *********** 2025-06-22 20:10:44.937410 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:44.937421 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:44.937432 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:44.937443 | orchestrator | 2025-06-22 20:10:44.937454 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:10:44.937465 | orchestrator | Sunday 22 June 2025 20:08:45 +0000 (0:00:00.336) 0:00:00.583 *********** 2025-06-22 20:10:44.937476 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-22 20:10:44.937487 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-22 20:10:44.937498 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-22 20:10:44.937709 | orchestrator | 2025-06-22 20:10:44.937722 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-22 20:10:44.937733 | orchestrator | 2025-06-22 20:10:44.937744 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 20:10:44.937756 | orchestrator | Sunday 22 June 2025 20:08:45 +0000 (0:00:00.466) 0:00:01.050 *********** 2025-06-22 20:10:44.937767 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:10:44.937778 | orchestrator | 2025-06-22 20:10:44.937789 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-22 20:10:44.937800 | orchestrator | Sunday 22 June 2025 20:08:46 +0000 (0:00:00.480) 0:00:01.531 *********** 2025-06-22 20:10:44.937812 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-22 20:10:44.937823 | orchestrator | 2025-06-22 20:10:44.937833 | orchestrator | TASK [service-ks-register : magnum | 
Creating endpoints] *********************** 2025-06-22 20:10:44.937844 | orchestrator | Sunday 22 June 2025 20:08:49 +0000 (0:00:03.441) 0:00:04.972 *********** 2025-06-22 20:10:44.937855 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-22 20:10:44.937866 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-22 20:10:44.937877 | orchestrator | 2025-06-22 20:10:44.937888 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-22 20:10:44.937899 | orchestrator | Sunday 22 June 2025 20:08:56 +0000 (0:00:06.558) 0:00:11.531 *********** 2025-06-22 20:10:44.937910 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:10:44.937921 | orchestrator | 2025-06-22 20:10:44.937932 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-22 20:10:44.937943 | orchestrator | Sunday 22 June 2025 20:08:59 +0000 (0:00:03.590) 0:00:15.121 *********** 2025-06-22 20:10:44.937953 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:10:44.937964 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-22 20:10:44.937975 | orchestrator | 2025-06-22 20:10:44.937987 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-22 20:10:44.937998 | orchestrator | Sunday 22 June 2025 20:09:03 +0000 (0:00:04.217) 0:00:19.339 *********** 2025-06-22 20:10:44.938008 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:10:44.938097 | orchestrator | 2025-06-22 20:10:44.938109 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-22 20:10:44.938120 | orchestrator | Sunday 22 June 2025 20:09:06 +0000 (0:00:02.833) 0:00:22.172 *********** 2025-06-22 20:10:44.938131 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-22 20:10:44.938142 | orchestrator | 2025-06-22 20:10:44.938152 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-22 20:10:44.938182 | orchestrator | Sunday 22 June 2025 20:09:10 +0000 (0:00:03.744) 0:00:25.917 *********** 2025-06-22 20:10:44.938193 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:44.938204 | orchestrator | 2025-06-22 20:10:44.938215 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-22 20:10:44.938226 | orchestrator | Sunday 22 June 2025 20:09:13 +0000 (0:00:02.878) 0:00:28.795 *********** 2025-06-22 20:10:44.938237 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:44.938247 | orchestrator | 2025-06-22 20:10:44.938258 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-22 20:10:44.938269 | orchestrator | Sunday 22 June 2025 20:09:17 +0000 (0:00:03.584) 0:00:32.380 *********** 2025-06-22 20:10:44.938292 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:44.938303 | orchestrator | 2025-06-22 20:10:44.938314 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-22 20:10:44.938324 | orchestrator | Sunday 22 June 2025 20:09:20 +0000 (0:00:03.594) 0:00:35.974 *********** 2025-06-22 20:10:44.938353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938447 | orchestrator | 2025-06-22 20:10:44.938458 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-22 20:10:44.938469 | orchestrator | Sunday 22 June 2025 20:09:23 +0000 (0:00:02.533) 0:00:38.508 *********** 2025-06-22 20:10:44.938480 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:44.938491 | orchestrator | 2025-06-22 20:10:44.938502 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-22 20:10:44.938513 | orchestrator | Sunday 22 June 2025 20:09:23 +0000 (0:00:00.336) 0:00:38.844 *********** 2025-06-22 20:10:44.938524 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:44.938534 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:44.938545 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:44.938556 | orchestrator | 2025-06-22 20:10:44.938566 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-22 20:10:44.938577 | orchestrator | Sunday 22 June 2025 20:09:24 +0000 (0:00:01.374) 0:00:40.218 *********** 2025-06-22 20:10:44.938588 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:10:44.938599 | orchestrator | 2025-06-22 20:10:44.938609 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-22 20:10:44.938620 | orchestrator | Sunday 22 June 2025 20:09:27 +0000 (0:00:02.370) 0:00:42.595 *********** 2025-06-22 20:10:44.938632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938650 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938727 | orchestrator | 2025-06-22 20:10:44.938738 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-22 20:10:44.938749 | orchestrator | Sunday 22 June 2025 20:09:30 +0000 (0:00:03.438) 0:00:46.033 *********** 2025-06-22 20:10:44.938760 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:44.938771 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:44.938782 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:44.938793 | orchestrator | 2025-06-22 20:10:44.938804 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 20:10:44.938815 | orchestrator | Sunday 22 June 2025 20:09:31 +0000 (0:00:00.647) 0:00:46.680 *********** 2025-06-22 20:10:44.938825 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:10:44.938836 | orchestrator | 2025-06-22 20:10:44.938847 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-22 20:10:44.938858 | orchestrator | Sunday 22 June 2025 20:09:32 +0000 (0:00:01.553) 0:00:48.234 *********** 2025-06-22 20:10:44.938874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.938921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.938960 | orchestrator | 2025-06-22 20:10:44.938971 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-22 20:10:44.938982 | orchestrator | Sunday 22 June 2025 20:09:36 +0000 (0:00:03.543) 0:00:51.777 *********** 2025-06-22 20:10:44.939000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939029 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:44.939041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939117 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:44.939142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939174 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:44.939185 | orchestrator | 2025-06-22 20:10:44.939196 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-22 20:10:44.939207 | orchestrator | Sunday 22 June 2025 20:09:37 +0000 (0:00:01.084) 0:00:52.861 *********** 2025-06-22 20:10:44.939219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939256 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:44.939267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939295 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:44.939313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939357 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:44.939375 | orchestrator | 2025-06-22 20:10:44.939386 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-22 20:10:44.939397 | orchestrator | Sunday 22 June 2025 20:09:38 +0000 (0:00:01.114) 0:00:53.975 *********** 2025-06-22 20:10:44.939409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939526 | orchestrator | 2025-06-22 20:10:44.939537 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-22 20:10:44.939548 | orchestrator | Sunday 22 June 2025 20:09:41 +0000 (0:00:02.690) 0:00:56.666 *********** 2025-06-22 20:10:44.939563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939649 | orchestrator | 2025-06-22 20:10:44.939660 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-22 20:10:44.939671 | orchestrator | Sunday 22 June 2025 20:09:46 +0000 (0:00:04.805) 0:01:01.471 *********** 2025-06-22 20:10:44.939688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939717 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:44.939728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939751 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:44.939766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:44.939789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:44.939801 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:44.939812 | orchestrator | 2025-06-22 20:10:44.939823 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-22 20:10:44.939834 | orchestrator | Sunday 22 June 2025 20:09:46 +0000 (0:00:00.711) 0:01:02.183 *********** 2025-06-22 20:10:44.939846 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:44.939885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:44.939933 | orchestrator | 2025-06-22 20:10:44.939944 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 20:10:44.939955 | orchestrator | Sunday 22 June 2025 20:09:49 +0000 (0:00:02.442) 0:01:04.626 *********** 2025-06-22 20:10:44.939966 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:44.939977 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:44.939988 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:44.939999 | orchestrator | 2025-06-22 20:10:44.940009 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-22 20:10:44.940020 | orchestrator | Sunday 22 June 2025 20:09:49 +0000 (0:00:00.395) 0:01:05.021 *********** 2025-06-22 20:10:44.940031 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:44.940041 | orchestrator | 2025-06-22 20:10:44.940108 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-22 20:10:44.940120 | orchestrator | Sunday 22 June 2025 20:09:52 +0000 (0:00:02.361) 0:01:07.382 *********** 2025-06-22 20:10:44.940131 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:44.940141 | orchestrator | 2025-06-22 20:10:44.940152 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-22 20:10:44.940163 | orchestrator | Sunday 22 June 2025 20:09:54 +0000 (0:00:02.286) 0:01:09.668 *********** 2025-06-22 20:10:44.940174 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:44.940185 | orchestrator | 2025-06-22 20:10:44.940195 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 20:10:44.940206 | orchestrator | Sunday 22 June 2025 20:10:08 +0000 (0:00:14.351) 0:01:24.019 *********** 2025-06-22 20:10:44.940217 | orchestrator | 2025-06-22 20:10:44.940228 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 20:10:44.940238 | orchestrator | Sunday 22 June 2025 20:10:08 +0000 (0:00:00.050) 0:01:24.070 *********** 2025-06-22 20:10:44.940249 | orchestrator | 2025-06-22 20:10:44.940267 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 
20:10:44.940278 | orchestrator | Sunday 22 June 2025 20:10:08 +0000 (0:00:00.049) 0:01:24.120 *********** 2025-06-22 20:10:44.940289 | orchestrator | 2025-06-22 20:10:44.940300 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-22 20:10:44.940311 | orchestrator | Sunday 22 June 2025 20:10:08 +0000 (0:00:00.049) 0:01:24.170 *********** 2025-06-22 20:10:44.940322 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:44.940332 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:44.940343 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:44.940354 | orchestrator | 2025-06-22 20:10:44.940365 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-22 20:10:44.940376 | orchestrator | Sunday 22 June 2025 20:10:30 +0000 (0:00:21.948) 0:01:46.120 *********** 2025-06-22 20:10:44.940387 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:44.940397 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:44.940408 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:44.940419 | orchestrator | 2025-06-22 20:10:44.940435 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:10:44.940447 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:10:44.940458 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:10:44.940469 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:10:44.940480 | orchestrator | 2025-06-22 20:10:44.940491 | orchestrator | 2025-06-22 20:10:44.940502 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:10:44.940513 | orchestrator | Sunday 22 June 2025 20:10:43 +0000 (0:00:12.962) 0:01:59.082 *********** 2025-06-22 20:10:44.940525 | orchestrator | =============================================================================== 2025-06-22 20:10:44.940536 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 21.95s 2025-06-22 20:10:44.940553 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 14.35s 2025-06-22 20:10:44.940564 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 12.96s 2025-06-22 20:10:44.940575 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.56s 2025-06-22 20:10:44.940586 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.81s 2025-06-22 20:10:44.940597 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.22s 2025-06-22 20:10:44.940608 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.74s 2025-06-22 20:10:44.940618 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.59s 2025-06-22 20:10:44.940628 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.59s 2025-06-22 20:10:44.940638 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.58s 2025-06-22 20:10:44.940647 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.54s 2025-06-22 20:10:44.940657 | orchestrator | service-ks-register : 
magnum | Creating services ------------------------ 3.44s 2025-06-22 20:10:44.940666 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.44s 2025-06-22 20:10:44.940676 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.88s 2025-06-22 20:10:44.940686 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 2.83s 2025-06-22 20:10:44.940695 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.69s 2025-06-22 20:10:44.940705 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.53s 2025-06-22 20:10:44.940715 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.44s 2025-06-22 20:10:44.940733 | orchestrator | magnum : Check if kubeconfig file is supplied --------------------------- 2.38s 2025-06-22 20:10:44.940743 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.36s 2025-06-22 20:10:44.940753 | orchestrator | 2025-06-22 20:10:44 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:44.940763 | orchestrator | 2025-06-22 20:10:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:47.965085 | orchestrator | 2025-06-22 20:10:47 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:10:47.965951 | orchestrator | 2025-06-22 20:10:47 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:47.965982 | orchestrator | 2025-06-22 20:10:47 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:47.966532 | orchestrator | 2025-06-22 20:10:47 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:47.966558 | orchestrator | 2025-06-22 20:10:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:51.002917 | orchestrator | 2025-06-22 20:10:51 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:10:51.003036 | orchestrator | 2025-06-22 20:10:51 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:51.003605 | orchestrator | 2025-06-22 20:10:51 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:51.003964 | orchestrator | 2025-06-22 20:10:51 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:51.004001 | orchestrator | 2025-06-22 20:10:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:54.031495 | orchestrator | 2025-06-22 20:10:54 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:10:54.031577 | orchestrator | 2025-06-22 20:10:54 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:10:54.032254 | orchestrator | 2025-06-22 20:10:54 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:54.033246 | orchestrator | 2025-06-22 20:10:54 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:54.033260 | orchestrator | 2025-06-22 20:10:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:57.061532 | orchestrator | 2025-06-22 20:10:57 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:10:57.062107 | orchestrator | 2025-06-22 20:10:57 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 
20:10:57.062472 | orchestrator | 2025-06-22 20:10:57 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:10:57.064807 | orchestrator | 2025-06-22 20:10:57 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:10:57.064870 | orchestrator | 2025-06-22 20:10:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:00.104224 | orchestrator | 2025-06-22 20:11:00 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:00.109262 | orchestrator | 2025-06-22 20:11:00 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:00.111420 | orchestrator | 2025-06-22 20:11:00 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:00.113174 | orchestrator | 2025-06-22 20:11:00 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:00.113505 | orchestrator | 2025-06-22 20:11:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:03.132997 | orchestrator | 2025-06-22 20:11:03 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:03.134005 | orchestrator | 2025-06-22 20:11:03 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:03.134876 | orchestrator | 2025-06-22 20:11:03 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:03.135448 | orchestrator | 2025-06-22 20:11:03 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:03.135476 | orchestrator | 2025-06-22 20:11:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:06.165859 | orchestrator | 2025-06-22 20:11:06 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:06.168540 | orchestrator | 2025-06-22 20:11:06 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:06.168978 | orchestrator | 2025-06-22 20:11:06 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:06.172481 | orchestrator | 2025-06-22 20:11:06 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:06.172537 | orchestrator | 2025-06-22 20:11:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:09.195262 | orchestrator | 2025-06-22 20:11:09 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:09.195467 | orchestrator | 2025-06-22 20:11:09 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:09.197869 | orchestrator | 2025-06-22 20:11:09 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:09.199061 | orchestrator | 2025-06-22 20:11:09 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:09.199090 | orchestrator | 2025-06-22 20:11:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:12.227302 | orchestrator | 2025-06-22 20:11:12 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:12.227397 | orchestrator | 2025-06-22 20:11:12 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:12.228210 | orchestrator | 2025-06-22 20:11:12 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:12.229020 | orchestrator | 2025-06-22 20:11:12 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 
20:11:12.229920 | orchestrator | 2025-06-22 20:11:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:15.266894 | orchestrator | 2025-06-22 20:11:15 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:15.268984 | orchestrator | 2025-06-22 20:11:15 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:15.280222 | orchestrator | 2025-06-22 20:11:15 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:15.280297 | orchestrator | 2025-06-22 20:11:15 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:15.280312 | orchestrator | 2025-06-22 20:11:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:18.315499 | orchestrator | 2025-06-22 20:11:18 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:18.316602 | orchestrator | 2025-06-22 20:11:18 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:18.318106 | orchestrator | 2025-06-22 20:11:18 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:18.319285 | orchestrator | 2025-06-22 20:11:18 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:18.319311 | orchestrator | 2025-06-22 20:11:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:21.368216 | orchestrator | 2025-06-22 20:11:21 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:21.372309 | orchestrator | 2025-06-22 20:11:21 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:21.375093 | orchestrator | 2025-06-22 20:11:21 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:21.377591 | orchestrator | 2025-06-22 20:11:21 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:21.378147 | orchestrator | 2025-06-22 20:11:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:24.422658 | orchestrator | 2025-06-22 20:11:24 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:24.426559 | orchestrator | 2025-06-22 20:11:24 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:24.429363 | orchestrator | 2025-06-22 20:11:24 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:24.430908 | orchestrator | 2025-06-22 20:11:24 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:24.430941 | orchestrator | 2025-06-22 20:11:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:27.472225 | orchestrator | 2025-06-22 20:11:27 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:27.472872 | orchestrator | 2025-06-22 20:11:27 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:27.473593 | orchestrator | 2025-06-22 20:11:27 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:27.474461 | orchestrator | 2025-06-22 20:11:27 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:27.474491 | orchestrator | 2025-06-22 20:11:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:30.522496 | orchestrator | 2025-06-22 20:11:30 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:30.526771 | 
orchestrator | 2025-06-22 20:11:30 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:30.528911 | orchestrator | 2025-06-22 20:11:30 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:30.531116 | orchestrator | 2025-06-22 20:11:30 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:30.531160 | orchestrator | 2025-06-22 20:11:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:33.575696 | orchestrator | 2025-06-22 20:11:33 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:33.577712 | orchestrator | 2025-06-22 20:11:33 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:33.578437 | orchestrator | 2025-06-22 20:11:33 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:33.579197 | orchestrator | 2025-06-22 20:11:33 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:33.579390 | orchestrator | 2025-06-22 20:11:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:36.620244 | orchestrator | 2025-06-22 20:11:36 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:36.621744 | orchestrator | 2025-06-22 20:11:36 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:36.621812 | orchestrator | 2025-06-22 20:11:36 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:36.622762 | orchestrator | 2025-06-22 20:11:36 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:36.622850 | orchestrator | 2025-06-22 20:11:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:39.665385 | orchestrator | 2025-06-22 20:11:39 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:39.667068 | orchestrator | 2025-06-22 20:11:39 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:39.668122 | orchestrator | 2025-06-22 20:11:39 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:39.669514 | orchestrator | 2025-06-22 20:11:39 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:39.669546 | orchestrator | 2025-06-22 20:11:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:42.693972 | orchestrator | 2025-06-22 20:11:42 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:42.694628 | orchestrator | 2025-06-22 20:11:42 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:42.696157 | orchestrator | 2025-06-22 20:11:42 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:42.697553 | orchestrator | 2025-06-22 20:11:42 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:42.697583 | orchestrator | 2025-06-22 20:11:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:45.723471 | orchestrator | 2025-06-22 20:11:45 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:45.725541 | orchestrator | 2025-06-22 20:11:45 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:45.726073 | orchestrator | 2025-06-22 20:11:45 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:45.727499 | 
orchestrator | 2025-06-22 20:11:45 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:45.729046 | orchestrator | 2025-06-22 20:11:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:48.758303 | orchestrator | 2025-06-22 20:11:48 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:48.758405 | orchestrator | 2025-06-22 20:11:48 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:48.759068 | orchestrator | 2025-06-22 20:11:48 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state STARTED 2025-06-22 20:11:48.759862 | orchestrator | 2025-06-22 20:11:48 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:48.759894 | orchestrator | 2025-06-22 20:11:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:51.786528 | orchestrator | 2025-06-22 20:11:51 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:51.791153 | orchestrator | 2025-06-22 20:11:51 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:51.791219 | orchestrator | 2025-06-22 20:11:51 | INFO  | Task b0671601-5f22-4754-8404-0d41ace0295f is in state SUCCESS 2025-06-22 20:11:51.792431 | orchestrator | 2025-06-22 20:11:51.792498 | orchestrator | 2025-06-22 20:11:51.792513 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:11:51.792525 | orchestrator | 2025-06-22 20:11:51.792536 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:11:51.792568 | orchestrator | Sunday 22 June 2025 20:08:58 +0000 (0:00:00.208) 0:00:00.208 *********** 2025-06-22 20:11:51.792580 | orchestrator | ok: [testbed-manager] 2025-06-22 20:11:51.792590 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:11:51.792601 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:11:51.792845 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:11:51.792862 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:11:51.792872 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:11:51.792882 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:11:51.792892 | orchestrator | 2025-06-22 20:11:51.792902 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:11:51.792913 | orchestrator | Sunday 22 June 2025 20:08:59 +0000 (0:00:00.635) 0:00:00.843 *********** 2025-06-22 20:11:51.792923 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-06-22 20:11:51.792933 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-06-22 20:11:51.792943 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-06-22 20:11:51.792953 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-06-22 20:11:51.792964 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-06-22 20:11:51.792974 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-06-22 20:11:51.792983 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-22 20:11:51.792993 | orchestrator | 2025-06-22 20:11:51.793003 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-22 20:11:51.793033 | orchestrator | 2025-06-22 20:11:51.793043 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 
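(Editor's note: the long run of status lines above is the OSISM client polling the IDs of the queued deployment tasks until they leave the STARTED state; once a task reports SUCCESS, its captured Ansible output is streamed to the console, as in the prometheus play that follows. A minimal sketch of such a wait loop, assuming a hypothetical get_task_state() helper - the actual osism client code is not part of this log:)

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task IDs until none is still STARTED (get_task_state is a hypothetical helper)."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)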
2025-06-22 20:11:51.793053 | orchestrator | Sunday 22 June 2025 20:08:59 +0000 (0:00:00.504) 0:00:01.347 *********** 2025-06-22 20:11:51.793322 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:11:51.793341 | orchestrator | 2025-06-22 20:11:51.793351 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-22 20:11:51.793362 | orchestrator | Sunday 22 June 2025 20:09:01 +0000 (0:00:01.210) 0:00:02.557 *********** 2025-06-22 20:11:51.793374 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:11:51.793388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.793398 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.793419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.793464 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.793477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793494 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793505 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.793515 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.793525 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.793535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793587 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:11:51.793606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793639 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793698 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793720 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793745 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793814 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.793826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.793881 | orchestrator | 2025-06-22 20:11:51.793891 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-22 20:11:51.793902 | orchestrator | Sunday 22 June 2025 20:09:04 +0000 (0:00:02.986) 0:00:05.544 *********** 2025-06-22 20:11:51.793912 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, 
testbed-node-5 2025-06-22 20:11:51.793922 | orchestrator | 2025-06-22 20:11:51.793936 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-22 20:11:51.793947 | orchestrator | Sunday 22 June 2025 20:09:05 +0000 (0:00:01.342) 0:00:06.887 *********** 2025-06-22 20:11:51.793958 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:11:51.793971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.793989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.794000 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.794466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.794485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.794496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.794521 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.794532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.794551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.794562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.794573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.794605 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.794617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.794628 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.794644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.794655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.794671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.794681 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.794976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.795314 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.795334 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:11:51.795353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.795372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
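(Editor's note: each changed/skipping item above is one entry of the prometheus services dictionary that the kolla-ansible role iterates over; every value carries the container name, group, image reference, bind-mounted volumes and, for prometheus-server and prometheus-alertmanager, an haproxy section. A small sketch of that data shape and the key/value style iteration, with field values copied from this log - the loop is only illustrative, not the role's actual Ansible code:)

    # One entry of the services dict, reconstructed from the values shown in the log.
    prometheus_services = {
        "prometheus-node-exporter": {
            "container_name": "prometheus_node_exporter",
            "group": "prometheus-node-exporter",
            "enabled": True,
            "image": "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
            "pid_mode": "host",
            "volumes": [
                "/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro",
                "/etc/localtime:/etc/localtime:ro",
                "/etc/timezone:/etc/timezone:ro",
                "kolla_logs:/var/log/kolla/",
                "/:/host:ro,rslave",
            ],
            "dimensions": {},
        },
    }

    # Illustrative equivalent of looping over dict items: one action per enabled service.
    for key, value in prometheus_services.items():
        if value["enabled"]:
            print(f"ensuring config directory for {key} ({value['container_name']})")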
2025-06-22 20:11:51.795383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.795393 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.795403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.795444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.795456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.795467 | orchestrator | 2025-06-22 20:11:51.795477 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-22 20:11:51.795488 | orchestrator | Sunday 22 June 2025 20:09:10 +0000 (0:00:04.972) 0:00:11.859 *********** 2025-06-22 20:11:51.795503 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 20:11:51.795519 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.795529 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 20:11:51.795577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795588 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:11:51.795598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.795606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.795657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795704 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.795750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.795784 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.795792 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.795800 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.795831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.795841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795857 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795866 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.795874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.795883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795891 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795899 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.795907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.795916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.795963 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.795972 | orchestrator | 2025-06-22 20:11:51.795981 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-22 20:11:51.795990 | orchestrator | Sunday 22 June 2025 20:09:11 +0000 (0:00:01.444) 0:00:13.304 *********** 2025-06-22 20:11:51.796006 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 
'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 20:11:51.796031 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.796040 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.796050 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 20:11:51.796060 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.796108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.796140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796150 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:11:51.796159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.796168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.796225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796235 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.796244 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.796257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.796267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.796294 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:11:51.796303 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.796332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.796348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.796356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.796365 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.796376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:11:51.796384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:11:51.796393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-22 20:11:51.796401 | orchestrator | skipping: [testbed-node-4]
2025-06-22 20:11:51.796409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-06-22 20:11:51.796417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-06-22 20:11:51.796451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-06-22 20:11:51.796461 | orchestrator | skipping: [testbed-node-5]
2025-06-22 20:11:51.796469 | orchestrator |
2025-06-22 20:11:51.796477 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-06-22 20:11:51.796485 | orchestrator | Sunday 22 June 2025 20:09:13 +0000 (0:00:01.691) 0:00:14.995 ***********
2025-06-22 20:11:51.796493 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-06-22 20:11:51.796505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled':
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.796514 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.796522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.796530 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.796543 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.796572 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.796581 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2025-06-22 20:11:51.796589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.796601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.796619 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.796627 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796640 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796669 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.796687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.796698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.796715 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:11:51.796729 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796757 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.796795 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.796803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 20:11:51.796818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 20:11:51.796827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-22 20:11:51.796835 | orchestrator |
2025-06-22 20:11:51.796843 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] *******************
2025-06-22 20:11:51.796852 | orchestrator | Sunday 22 June 2025 20:09:19 +0000 (0:00:05.540) 0:00:20.535 ***********
2025-06-22 20:11:51.796860 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 20:11:51.796868 | orchestrator |
2025-06-22 20:11:51.796876 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] ***********
2025-06-22 20:11:51.796904 | orchestrator | Sunday 22 June 2025 20:09:19 +0000 (0:00:00.781) 0:00:21.317 ***********
2025-06-22 20:11:51.796914 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098353, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-22 20:11:51.796923 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098353, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-22 20:11:51.796936 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3,
'inode': 1098353, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.796944 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098341, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.796958 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098353, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.796966 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098353, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.796995 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098341, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797004 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098353, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797027 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098341, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797039 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098325, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797048 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098353, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797061 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098325, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797069 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098341, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797099 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098325, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 
20:11:51.797108 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098341, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797116 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098327, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797128 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098341, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797141 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098341, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.797150 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098327, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797158 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098325, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797166 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098327, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797195 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098339, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797205 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098325, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797216 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098339, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797231 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098339, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797239 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098325, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 
'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797247 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098327, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797256 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098331, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1282203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797284 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098331, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1282203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797293 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098325, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.797305 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098331, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1282203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797318 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098327, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797326 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098327, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797335 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098337, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797343 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098337, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797372 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098339, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797381 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098337, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797393 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098339, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797406 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098344, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797415 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098339, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797423 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098344, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797431 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098331, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1282203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797459 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098344, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-06-22 20:11:51.797469 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098327, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.797485 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098331, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1282203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797494 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098351, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797503 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098351, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797511 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098331, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1282203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797519 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098337, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797549 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098351, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797558 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098369, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797578 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098337, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797586 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098369, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797595 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098337, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797603 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098344, 'dev': 148, 'nlink': 
1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797611 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098369, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797640 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098347, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797649 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098339, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.797665 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098344, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797674 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098347, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797682 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098344, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797690 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098351, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797699 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098351, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797727 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098347, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797741 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098329, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1272202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797752 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098329, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1272202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797761 | orchestrator | skipping: [testbed-node-3] 
=> (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098369, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797769 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098351, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797778 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098331, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1282203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.797786 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098369, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797815 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098329, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1272202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797829 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098369, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797841 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098336, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797849 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098336, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797858 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098347, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797866 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098347, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797874 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098347, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797902 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098336, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 
1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797916 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098323, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797928 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098329, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1272202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797936 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098323, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797945 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1098337, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1312203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.797953 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098329, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1272202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797961 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098329, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1272202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797973 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098323, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797986 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098340, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.797998 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098340, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798006 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098336, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798055 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098336, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798064 | orchestrator | skipping: [testbed-node-2] => (item={'path': 
'/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098367, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798072 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1098344, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798093 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098340, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798102 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098336, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798114 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098367, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798122 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098323, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798131 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098333, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798139 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098323, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798147 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098340, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798165 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098333, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798173 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098367, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798187 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098323, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798196 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098356, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1342204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798204 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.798212 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1098351, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798220 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098333, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798229 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098367, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798246 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098356, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1342204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798255 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.798263 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098340, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798275 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098340, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798284 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098356, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1342204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798292 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.798300 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098333, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798308 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098367, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798321 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098367, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798333 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098356, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1342204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798342 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.798350 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098333, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798361 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098333, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798369 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1098369, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798377 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098356, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1342204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798386 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098356, 'dev': 148, 'nlink': 1, 'atime': 
1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1342204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:11:51.798402 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.798410 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.798418 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1098347, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1332204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798431 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098329, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1272202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798439 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1098336, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798451 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098323, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1262202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798459 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1098340, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1322203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798468 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1098367, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1362205, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798483 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1098333, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1292202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798492 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1098356, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1342204, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:11:51.798500 | orchestrator | 2025-06-22 20:11:51.798508 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-22 20:11:51.798517 | orchestrator | Sunday 22 June 2025 20:09:46 +0000 (0:00:26.398) 0:00:47.715 *********** 2025-06-22 20:11:51.798529 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:11:51.798538 | orchestrator | 2025-06-22 20:11:51.798545 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-22 20:11:51.798553 | orchestrator | Sunday 22 June 2025 20:09:47 +0000 (0:00:00.723) 0:00:48.439 *********** 2025-06-22 20:11:51.798562 | orchestrator | [WARNING]: Skipped 2025-06-22 20:11:51.798570 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798578 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-22 20:11:51.798586 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798594 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-22 20:11:51.798602 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:11:51.798610 | orchestrator | [WARNING]: Skipped 2025-06-22 20:11:51.798618 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798626 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-22 20:11:51.798634 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798642 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-22 20:11:51.798650 | orchestrator | [WARNING]: Skipped 2025-06-22 
20:11:51.798658 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798666 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-22 20:11:51.798674 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798682 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-22 20:11:51.798690 | orchestrator | [WARNING]: Skipped 2025-06-22 20:11:51.798698 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798709 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-22 20:11:51.798717 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798725 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-22 20:11:51.798733 | orchestrator | [WARNING]: Skipped 2025-06-22 20:11:51.798741 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798754 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-22 20:11:51.798762 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798769 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-22 20:11:51.798777 | orchestrator | [WARNING]: Skipped 2025-06-22 20:11:51.798785 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798793 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-22 20:11:51.798801 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798809 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-22 20:11:51.798817 | orchestrator | [WARNING]: Skipped 2025-06-22 20:11:51.798824 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798832 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-22 20:11:51.798840 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-22 20:11:51.798848 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-22 20:11:51.798856 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:11:51.798864 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 20:11:51.798872 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 20:11:51.798880 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:11:51.798887 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 20:11:51.798895 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 20:11:51.798903 | orchestrator | 2025-06-22 20:11:51.798911 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-22 20:11:51.798919 | orchestrator | Sunday 22 June 2025 20:09:48 +0000 (0:00:01.567) 0:00:50.007 *********** 2025-06-22 20:11:51.798927 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:11:51.798935 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.798943 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:11:51.798951 | orchestrator | skipping: [testbed-node-1] 2025-06-22 
20:11:51.798959 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:11:51.798967 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.798975 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:11:51.798983 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.798991 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:11:51.798999 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799007 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-22 20:11:51.799047 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799056 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-06-22 20:11:51.799064 | orchestrator | 2025-06-22 20:11:51.799072 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-06-22 20:11:51.799080 | orchestrator | Sunday 22 June 2025 20:10:01 +0000 (0:00:12.434) 0:01:02.441 *********** 2025-06-22 20:11:51.799092 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:11:51.799101 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.799109 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:11:51.799117 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.799125 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:11:51.799133 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.799146 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:11:51.799154 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.799162 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:11:51.799170 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799177 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-06-22 20:11:51.799185 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799193 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-06-22 20:11:51.799201 | orchestrator | 2025-06-22 20:11:51.799209 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-06-22 20:11:51.799217 | orchestrator | Sunday 22 June 2025 20:10:03 +0000 (0:00:02.458) 0:01:04.900 *********** 2025-06-22 20:11:51.799225 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:11:51.799234 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:11:51.799245 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.799254 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.799262 | orchestrator | skipping: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:11:51.799270 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.799278 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:11:51.799286 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.799293 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:11:51.799301 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799309 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-22 20:11:51.799317 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:11:51.799325 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799333 | orchestrator | 2025-06-22 20:11:51.799341 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-22 20:11:51.799349 | orchestrator | Sunday 22 June 2025 20:10:05 +0000 (0:00:01.661) 0:01:06.561 *********** 2025-06-22 20:11:51.799357 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:11:51.799364 | orchestrator | 2025-06-22 20:11:51.799372 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-22 20:11:51.799380 | orchestrator | Sunday 22 June 2025 20:10:05 +0000 (0:00:00.742) 0:01:07.304 *********** 2025-06-22 20:11:51.799388 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:11:51.799396 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.799403 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.799411 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.799419 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.799427 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799436 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799450 | orchestrator | 2025-06-22 20:11:51.799460 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-22 20:11:51.799468 | orchestrator | Sunday 22 June 2025 20:10:06 +0000 (0:00:00.962) 0:01:08.266 *********** 2025-06-22 20:11:51.799476 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:11:51.799484 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.799490 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799501 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799508 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:11:51.799514 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:11:51.799521 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:11:51.799527 | orchestrator | 2025-06-22 20:11:51.799534 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-22 20:11:51.799541 | orchestrator | Sunday 22 June 2025 20:10:09 +0000 (0:00:02.220) 0:01:10.486 *********** 2025-06-22 20:11:51.799547 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:11:51.799554 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:11:51.799561 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:11:51.799567 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.799574 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:11:51.799581 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.799587 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:11:51.799594 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.799604 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:11:51.799611 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.799617 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:11:51.799624 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799631 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:11:51.799637 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799644 | orchestrator | 2025-06-22 20:11:51.799651 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-22 20:11:51.799658 | orchestrator | Sunday 22 June 2025 20:10:11 +0000 (0:00:02.498) 0:01:12.985 *********** 2025-06-22 20:11:51.799664 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:11:51.799671 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.799678 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:11:51.799685 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.799691 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:11:51.799698 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.799705 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-22 20:11:51.799712 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:11:51.799718 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.799728 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:11:51.799735 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799742 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:11:51.799749 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799755 | orchestrator | 2025-06-22 20:11:51.799762 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-22 20:11:51.799769 | orchestrator | Sunday 22 June 2025 20:10:13 +0000 (0:00:02.163) 0:01:15.149 *********** 2025-06-22 20:11:51.799775 | orchestrator | [WARNING]: Skipped 2025-06-22 20:11:51.799782 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-22 20:11:51.799789 | orchestrator | due to this access issue: 
2025-06-22 20:11:51.799800 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-22 20:11:51.799806 | orchestrator | not a directory 2025-06-22 20:11:51.799813 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:11:51.799819 | orchestrator | 2025-06-22 20:11:51.799826 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-22 20:11:51.799833 | orchestrator | Sunday 22 June 2025 20:10:14 +0000 (0:00:01.191) 0:01:16.340 *********** 2025-06-22 20:11:51.799839 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:11:51.799846 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.799853 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.799859 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.799866 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.799873 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799879 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799886 | orchestrator | 2025-06-22 20:11:51.799892 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-22 20:11:51.799899 | orchestrator | Sunday 22 June 2025 20:10:15 +0000 (0:00:01.029) 0:01:17.369 *********** 2025-06-22 20:11:51.799906 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:11:51.799913 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:11:51.799919 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:11:51.799926 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:11:51.799932 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:11:51.799939 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:11:51.799945 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:11:51.799952 | orchestrator | 2025-06-22 20:11:51.799959 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-22 20:11:51.799965 | orchestrator | Sunday 22 June 2025 20:10:16 +0000 (0:00:00.722) 0:01:18.092 *********** 2025-06-22 20:11:51.799972 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:11:51.799983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.799991 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.799998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.800025 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.800033 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.800040 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.800047 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:11:51.800054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800065 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800094 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800116 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:11:51.800127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800145 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800162 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800176 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:11:51.800214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-22 20:11:51.800222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:11:51.800236 | orchestrator | 2025-06-22 20:11:51.800243 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-22 20:11:51.800250 | orchestrator | Sunday 22 June 2025 20:10:20 +0000 (0:00:04.199) 0:01:22.292 *********** 2025-06-22 20:11:51.800257 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 20:11:51.800263 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:11:51.800270 | orchestrator | 2025-06-22 20:11:51.800277 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:11:51.800284 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:01.214) 0:01:23.506 *********** 2025-06-22 20:11:51.800290 | orchestrator | 2025-06-22 20:11:51.800297 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:11:51.800304 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:00.219) 0:01:23.726 *********** 2025-06-22 20:11:51.800310 | orchestrator | 2025-06-22 20:11:51.800317 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:11:51.800324 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:00.064) 0:01:23.791 *********** 2025-06-22 20:11:51.800330 | orchestrator | 2025-06-22 20:11:51.800337 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:11:51.800344 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:00.063) 0:01:23.854 *********** 2025-06-22 20:11:51.800350 | orchestrator | 2025-06-22 20:11:51.800357 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:11:51.800363 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:00.063) 0:01:23.917 *********** 2025-06-22 20:11:51.800370 | orchestrator | 2025-06-22 20:11:51.800377 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:11:51.800383 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:00.059) 0:01:23.977 *********** 2025-06-22 20:11:51.800390 | orchestrator | 2025-06-22 20:11:51.800397 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:11:51.800407 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:00.064) 
0:01:24.042 *********** 2025-06-22 20:11:51.800414 | orchestrator | 2025-06-22 20:11:51.800420 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-22 20:11:51.800427 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:00.086) 0:01:24.129 *********** 2025-06-22 20:11:51.800434 | orchestrator | changed: [testbed-manager] 2025-06-22 20:11:51.800440 | orchestrator | 2025-06-22 20:11:51.800447 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-22 20:11:51.800456 | orchestrator | Sunday 22 June 2025 20:10:36 +0000 (0:00:14.262) 0:01:38.392 *********** 2025-06-22 20:11:51.800464 | orchestrator | changed: [testbed-manager] 2025-06-22 20:11:51.800470 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:11:51.800477 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:11:51.800484 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:11:51.800490 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:11:51.800497 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:11:51.800504 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:11:51.800510 | orchestrator | 2025-06-22 20:11:51.800517 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-22 20:11:51.800524 | orchestrator | Sunday 22 June 2025 20:10:49 +0000 (0:00:12.521) 0:01:50.913 *********** 2025-06-22 20:11:51.800530 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:11:51.800537 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:11:51.800543 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:11:51.800550 | orchestrator | 2025-06-22 20:11:51.800557 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-22 20:11:51.800564 | orchestrator | Sunday 22 June 2025 20:10:55 +0000 (0:00:06.044) 0:01:56.958 *********** 2025-06-22 20:11:51.800570 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:11:51.800577 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:11:51.800584 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:11:51.800591 | orchestrator | 2025-06-22 20:11:51.800597 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-22 20:11:51.800604 | orchestrator | Sunday 22 June 2025 20:11:01 +0000 (0:00:05.624) 0:02:02.582 *********** 2025-06-22 20:11:51.800611 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:11:51.800617 | orchestrator | changed: [testbed-manager] 2025-06-22 20:11:51.800624 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:11:51.800631 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:11:51.800638 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:11:51.800644 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:11:51.800651 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:11:51.800657 | orchestrator | 2025-06-22 20:11:51.800667 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-22 20:11:51.800674 | orchestrator | Sunday 22 June 2025 20:11:16 +0000 (0:00:15.622) 0:02:18.205 *********** 2025-06-22 20:11:51.800680 | orchestrator | changed: [testbed-manager] 2025-06-22 20:11:51.800687 | orchestrator | 2025-06-22 20:11:51.800694 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-22 20:11:51.800700 | orchestrator | Sunday 22 June 2025 20:11:25 +0000 (0:00:09.005) 
0:02:27.211 *********** 2025-06-22 20:11:51.800707 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:11:51.800714 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:11:51.800720 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:11:51.800727 | orchestrator | 2025-06-22 20:11:51.800734 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-22 20:11:51.800740 | orchestrator | Sunday 22 June 2025 20:11:32 +0000 (0:00:06.250) 0:02:33.461 *********** 2025-06-22 20:11:51.800747 | orchestrator | changed: [testbed-manager] 2025-06-22 20:11:51.800754 | orchestrator | 2025-06-22 20:11:51.800760 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-22 20:11:51.800767 | orchestrator | Sunday 22 June 2025 20:11:38 +0000 (0:00:06.843) 0:02:40.304 *********** 2025-06-22 20:11:51.800778 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:11:51.800784 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:11:51.800791 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:11:51.800798 | orchestrator | 2025-06-22 20:11:51.800804 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:11:51.800811 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:11:51.800818 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:11:51.800825 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:11:51.800832 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:11:51.800838 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:11:51.800845 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:11:51.800852 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:11:51.800858 | orchestrator | 2025-06-22 20:11:51.800865 | orchestrator | 2025-06-22 20:11:51.800872 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:11:51.800878 | orchestrator | Sunday 22 June 2025 20:11:50 +0000 (0:00:12.026) 0:02:52.331 *********** 2025-06-22 20:11:51.800885 | orchestrator | =============================================================================== 2025-06-22 20:11:51.800892 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 26.40s 2025-06-22 20:11:51.800898 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.62s 2025-06-22 20:11:51.800905 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 14.26s 2025-06-22 20:11:51.800912 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 12.52s 2025-06-22 20:11:51.800922 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 12.43s 2025-06-22 20:11:51.800929 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 12.03s 2025-06-22 20:11:51.800936 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.01s 2025-06-22 20:11:51.800942 | 
orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 6.84s 2025-06-22 20:11:51.800949 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 6.25s 2025-06-22 20:11:51.800956 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 6.04s 2025-06-22 20:11:51.800962 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.62s 2025-06-22 20:11:51.800969 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.54s 2025-06-22 20:11:51.800976 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 4.97s 2025-06-22 20:11:51.800982 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.20s 2025-06-22 20:11:51.800989 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.99s 2025-06-22 20:11:51.800995 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.50s 2025-06-22 20:11:51.801002 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.46s 2025-06-22 20:11:51.801009 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.22s 2025-06-22 20:11:51.801041 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.16s 2025-06-22 20:11:51.801053 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.69s 2025-06-22 20:11:51.801060 | orchestrator | 2025-06-22 20:11:51 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:51.801069 | orchestrator | 2025-06-22 20:11:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:54.813128 | orchestrator | 2025-06-22 20:11:54 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:54.813251 | orchestrator | 2025-06-22 20:11:54 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:54.813658 | orchestrator | 2025-06-22 20:11:54 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:54.814290 | orchestrator | 2025-06-22 20:11:54 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:11:54.814325 | orchestrator | 2025-06-22 20:11:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:57.841693 | orchestrator | 2025-06-22 20:11:57 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:11:57.843575 | orchestrator | 2025-06-22 20:11:57 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:11:57.845824 | orchestrator | 2025-06-22 20:11:57 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:11:57.847875 | orchestrator | 2025-06-22 20:11:57 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:11:57.847943 | orchestrator | 2025-06-22 20:11:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:00.882915 | orchestrator | 2025-06-22 20:12:00 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:00.884930 | orchestrator | 2025-06-22 20:12:00 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:12:00.887149 | orchestrator | 2025-06-22 20:12:00 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 
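[Editor's note] The repeated "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above come from the deployment wrapper polling the Ansible runs it launched in the background until each reports SUCCESS (as happens for task dac71a50 just below, after which the captured play output is printed). A minimal sketch of that polling pattern is shown here for orientation only; the `get_state()` helper and the stubbed task ID are assumptions for illustration, not the actual osism client API.

```python
# Hypothetical sketch of the polling pattern visible in this log: several
# background tasks are checked every few seconds until none is still STARTED.
# Illustrative only -- get_state() is an assumed lookup, not an osism API.
import time


def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll each task until it leaves the STARTED state (SUCCESS/FAILURE)."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)


if __name__ == "__main__":
    # Stubbed state sequence standing in for a real task back end.
    states = iter(["STARTED", "STARTED", "SUCCESS"])
    wait_for_tasks(["371a191d"], lambda _id: next(states))
```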
2025-06-22 20:12:00.888553 | orchestrator | 2025-06-22 20:12:00 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:00.888901 | orchestrator | 2025-06-22 20:12:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:03.928456 | orchestrator | 2025-06-22 20:12:03 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:03.930589 | orchestrator | 2025-06-22 20:12:03 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:12:03.933687 | orchestrator | 2025-06-22 20:12:03 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:03.936695 | orchestrator | 2025-06-22 20:12:03 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:03.936965 | orchestrator | 2025-06-22 20:12:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:06.977227 | orchestrator | 2025-06-22 20:12:06 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:06.978291 | orchestrator | 2025-06-22 20:12:06 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:12:06.979716 | orchestrator | 2025-06-22 20:12:06 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:06.981931 | orchestrator | 2025-06-22 20:12:06 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:06.982102 | orchestrator | 2025-06-22 20:12:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:10.039925 | orchestrator | 2025-06-22 20:12:10 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:10.040163 | orchestrator | 2025-06-22 20:12:10 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:12:10.041033 | orchestrator | 2025-06-22 20:12:10 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:10.041848 | orchestrator | 2025-06-22 20:12:10 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:10.041866 | orchestrator | 2025-06-22 20:12:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:13.081033 | orchestrator | 2025-06-22 20:12:13 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:13.081908 | orchestrator | 2025-06-22 20:12:13 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state STARTED 2025-06-22 20:12:13.082822 | orchestrator | 2025-06-22 20:12:13 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:13.083784 | orchestrator | 2025-06-22 20:12:13 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:13.083827 | orchestrator | 2025-06-22 20:12:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:16.149377 | orchestrator | 2025-06-22 20:12:16 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:16.149767 | orchestrator | 2025-06-22 20:12:16 | INFO  | Task dac71a50-3a36-48f8-8e9b-729b2066114b is in state SUCCESS 2025-06-22 20:12:16.151512 | orchestrator | 2025-06-22 20:12:16.151685 | orchestrator | 2025-06-22 20:12:16.151704 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:12:16.151719 | orchestrator | 2025-06-22 20:12:16.151731 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:12:16.151743 | 
orchestrator | Sunday 22 June 2025 20:09:38 +0000 (0:00:00.229) 0:00:00.229 *********** 2025-06-22 20:12:16.151756 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:12:16.151769 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:12:16.151781 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:12:16.151792 | orchestrator | 2025-06-22 20:12:16.151804 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:12:16.151816 | orchestrator | Sunday 22 June 2025 20:09:39 +0000 (0:00:00.338) 0:00:00.569 *********** 2025-06-22 20:12:16.151828 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-22 20:12:16.151840 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-22 20:12:16.151852 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-22 20:12:16.151864 | orchestrator | 2025-06-22 20:12:16.151876 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-22 20:12:16.151888 | orchestrator | 2025-06-22 20:12:16.151899 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:12:16.151911 | orchestrator | Sunday 22 June 2025 20:09:39 +0000 (0:00:00.410) 0:00:00.979 *********** 2025-06-22 20:12:16.151923 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:12:16.151936 | orchestrator | 2025-06-22 20:12:16.152028 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-22 20:12:16.152044 | orchestrator | Sunday 22 June 2025 20:09:40 +0000 (0:00:00.562) 0:00:01.542 *********** 2025-06-22 20:12:16.152056 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-22 20:12:16.152069 | orchestrator | 2025-06-22 20:12:16.152080 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-22 20:12:16.152092 | orchestrator | Sunday 22 June 2025 20:09:43 +0000 (0:00:03.219) 0:00:04.761 *********** 2025-06-22 20:12:16.152103 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-22 20:12:16.152115 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-22 20:12:16.152151 | orchestrator | 2025-06-22 20:12:16.152163 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-22 20:12:16.152175 | orchestrator | Sunday 22 June 2025 20:09:49 +0000 (0:00:06.026) 0:00:10.788 *********** 2025-06-22 20:12:16.152186 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:12:16.152198 | orchestrator | 2025-06-22 20:12:16.152210 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-22 20:12:16.152221 | orchestrator | Sunday 22 June 2025 20:09:53 +0000 (0:00:03.563) 0:00:14.351 *********** 2025-06-22 20:12:16.152232 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:12:16.152244 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-22 20:12:16.152255 | orchestrator | 2025-06-22 20:12:16.152266 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-22 20:12:16.152277 | orchestrator | Sunday 22 June 2025 20:09:56 +0000 (0:00:03.809) 0:00:18.160 *********** 2025-06-22 
20:12:16.152288 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:12:16.152300 | orchestrator | 2025-06-22 20:12:16.152311 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-22 20:12:16.152322 | orchestrator | Sunday 22 June 2025 20:10:00 +0000 (0:00:03.214) 0:00:21.374 *********** 2025-06-22 20:12:16.152333 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-22 20:12:16.152344 | orchestrator | 2025-06-22 20:12:16.152355 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-22 20:12:16.152366 | orchestrator | Sunday 22 June 2025 20:10:04 +0000 (0:00:04.308) 0:00:25.683 *********** 2025-06-22 20:12:16.152411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.152513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.152537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.152549 | orchestrator | 2025-06-22 20:12:16.152566 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:12:16.152578 | orchestrator | Sunday 22 June 2025 20:10:08 +0000 (0:00:04.121) 0:00:29.805 *********** 2025-06-22 20:12:16.152589 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:12:16.152601 | orchestrator | 2025-06-22 20:12:16.152622 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-22 20:12:16.152634 | orchestrator | Sunday 22 June 2025 20:10:09 +0000 (0:00:00.633) 0:00:30.438 *********** 2025-06-22 20:12:16.152645 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:16.152656 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:16.152667 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:16.152678 | orchestrator | 2025-06-22 20:12:16.152689 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-22 
20:12:16.152700 | orchestrator | Sunday 22 June 2025 20:10:14 +0000 (0:00:05.801) 0:00:36.239 *********** 2025-06-22 20:12:16.152711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:16.152729 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:16.152740 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:16.152752 | orchestrator | 2025-06-22 20:12:16.152763 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-22 20:12:16.152774 | orchestrator | Sunday 22 June 2025 20:10:16 +0000 (0:00:01.496) 0:00:37.736 *********** 2025-06-22 20:12:16.152785 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:16.152796 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:16.152807 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:16.152818 | orchestrator | 2025-06-22 20:12:16.152829 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-22 20:12:16.152840 | orchestrator | Sunday 22 June 2025 20:10:17 +0000 (0:00:01.124) 0:00:38.860 *********** 2025-06-22 20:12:16.152851 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:12:16.152862 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:12:16.152873 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:12:16.152884 | orchestrator | 2025-06-22 20:12:16.152895 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-22 20:12:16.152906 | orchestrator | Sunday 22 June 2025 20:10:18 +0000 (0:00:00.746) 0:00:39.607 *********** 2025-06-22 20:12:16.152917 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.152928 | orchestrator | 2025-06-22 20:12:16.152938 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-22 20:12:16.152949 | orchestrator | Sunday 22 June 2025 20:10:18 +0000 (0:00:00.098) 0:00:39.705 *********** 2025-06-22 20:12:16.152960 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.152994 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.153005 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.153016 | orchestrator | 2025-06-22 20:12:16.153027 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:12:16.153038 | orchestrator | Sunday 22 June 2025 20:10:18 +0000 (0:00:00.290) 0:00:39.996 *********** 2025-06-22 20:12:16.153049 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:12:16.153060 | orchestrator | 2025-06-22 20:12:16.153071 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-22 20:12:16.153082 | orchestrator | Sunday 22 June 2025 20:10:19 +0000 (0:00:00.503) 0:00:40.500 *********** 2025-06-22 20:12:16.153106 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.153128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.153143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.153156 | orchestrator | 2025-06-22 20:12:16.153169 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-22 20:12:16.153187 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:03.662) 0:00:44.163 *********** 2025-06-22 20:12:16.153215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:16.153229 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 20:12:16.153244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:16.153257 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.153284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}})  2025-06-22 20:12:16.153305 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.153317 | orchestrator | 2025-06-22 20:12:16.153330 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-22 20:12:16.153343 | orchestrator | Sunday 22 June 2025 20:10:26 +0000 (0:00:03.466) 0:00:47.629 *********** 2025-06-22 20:12:16.153355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:16.153367 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.153390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:16.153411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:16.153423 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.153435 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.153446 | orchestrator | 2025-06-22 20:12:16.153457 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-22 20:12:16.153468 | orchestrator | Sunday 22 June 2025 20:10:29 +0000 (0:00:03.195) 0:00:50.825 *********** 2025-06-22 20:12:16.153479 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.153490 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.153501 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.153512 | orchestrator | 2025-06-22 20:12:16.153523 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-22 20:12:16.153534 | orchestrator | Sunday 22 June 2025 20:10:34 +0000 (0:00:05.448) 0:00:56.273 *********** 2025-06-22 20:12:16.153556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.153585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.153598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.153617 | orchestrator | 2025-06-22 20:12:16.153628 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-22 20:12:16.153639 | orchestrator | Sunday 22 June 2025 20:10:40 +0000 (0:00:05.212) 0:01:01.486 *********** 2025-06-22 20:12:16.153650 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:16.153661 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:16.153672 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:16.153682 | orchestrator | 2025-06-22 20:12:16.153693 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-22 20:12:16.153724 | orchestrator | Sunday 22 June 2025 20:10:46 +0000 (0:00:06.349) 0:01:07.836 *********** 2025-06-22 20:12:16.153748 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.153760 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.153771 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.153782 | orchestrator | 2025-06-22 20:12:16.153793 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-22 20:12:16.153810 | orchestrator | Sunday 22 June 2025 20:10:49 +0000 (0:00:02.735) 0:01:10.571 *********** 2025-06-22 20:12:16.153821 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.153832 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.153843 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.153854 | orchestrator | 2025-06-22 20:12:16.153865 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-22 20:12:16.153876 | orchestrator | Sunday 22 June 2025 20:10:53 +0000 (0:00:03.948) 0:01:14.519 *********** 2025-06-22 20:12:16.153887 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.153898 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.153909 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.153920 | orchestrator | 2025-06-22 20:12:16.153931 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-22 20:12:16.153942 | orchestrator | Sunday 22 June 2025 20:10:55 +0000 (0:00:02.657) 0:01:17.177 *********** 2025-06-22 20:12:16.153952 | orchestrator | skipping: [testbed-node-2] 
2025-06-22 20:12:16.153963 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.153996 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.154007 | orchestrator | 2025-06-22 20:12:16.154072 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-22 20:12:16.154086 | orchestrator | Sunday 22 June 2025 20:10:59 +0000 (0:00:03.418) 0:01:20.595 *********** 2025-06-22 20:12:16.154097 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.154108 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.154119 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.154130 | orchestrator | 2025-06-22 20:12:16.154141 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-22 20:12:16.154152 | orchestrator | Sunday 22 June 2025 20:10:59 +0000 (0:00:00.282) 0:01:20.878 *********** 2025-06-22 20:12:16.154163 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:12:16.154174 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.154185 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:12:16.154196 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.154215 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:12:16.154227 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.154238 | orchestrator | 2025-06-22 20:12:16.154249 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-22 20:12:16.154260 | orchestrator | Sunday 22 June 2025 20:11:03 +0000 (0:00:03.912) 0:01:24.790 *********** 2025-06-22 20:12:16.154272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.154300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.154314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:16.154334 | orchestrator | 
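[editor note] The glance_api container definition repeated in the items above carries its own health checking: inside the container, kolla's healthcheck_curl helper probes http://<node-ip>:9292 every 30 s with 3 retries, and the HAProxy member entries use "check inter 2000 rise 2 fall 5" (probe every 2 s, 2 consecutive successes to mark a backend up, 5 consecutive failures to mark it down). Below is only a minimal stand-alone sketch of what such a probe amounts to, using the address taken from this log; it is not kolla's actual healthcheck script.

import time
import urllib.error
import urllib.request

def glance_api_ready(url: str = "http://192.168.16.10:9292/",
                     retries: int = 3, interval: float = 2.0,
                     timeout: float = 30.0) -> bool:
    """Poll the Glance API root until it answers (or retries run out)."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500       # 2xx: API version document served
        except urllib.error.HTTPError as exc:
            if exc.code < 500:                 # e.g. 300 "Multiple Choices":
                return True                    # glance is up and lists its versions
        except (urllib.error.URLError, OSError):
            pass                               # connection refused / timed out
        if attempt < retries - 1:
            time.sleep(interval)
    return False

if __name__ == "__main__":
    print("glance-api reachable:", glance_api_ready())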
2025-06-22 20:12:16.154345 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:12:16.154356 | orchestrator | Sunday 22 June 2025 20:11:09 +0000 (0:00:05.756) 0:01:30.547 *********** 2025-06-22 20:12:16.154367 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:16.154378 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:16.154389 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:16.154400 | orchestrator | 2025-06-22 20:12:16.154410 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-22 20:12:16.154421 | orchestrator | Sunday 22 June 2025 20:11:09 +0000 (0:00:00.303) 0:01:30.851 *********** 2025-06-22 20:12:16.154432 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:16.154443 | orchestrator | 2025-06-22 20:12:16.154454 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-22 20:12:16.154465 | orchestrator | Sunday 22 June 2025 20:11:11 +0000 (0:00:02.364) 0:01:33.215 *********** 2025-06-22 20:12:16.154476 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:16.154487 | orchestrator | 2025-06-22 20:12:16.154498 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-22 20:12:16.154509 | orchestrator | Sunday 22 June 2025 20:11:14 +0000 (0:00:02.429) 0:01:35.645 *********** 2025-06-22 20:12:16.154520 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:16.154531 | orchestrator | 2025-06-22 20:12:16.154542 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-22 20:12:16.154553 | orchestrator | Sunday 22 June 2025 20:11:16 +0000 (0:00:01.851) 0:01:37.497 *********** 2025-06-22 20:12:16.154568 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:16.154580 | orchestrator | 2025-06-22 20:12:16.154591 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-22 20:12:16.154602 | orchestrator | Sunday 22 June 2025 20:11:44 +0000 (0:00:28.225) 0:02:05.723 *********** 2025-06-22 20:12:16.154613 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:16.154624 | orchestrator | 2025-06-22 20:12:16.154641 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 20:12:16.154652 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:02.353) 0:02:08.076 *********** 2025-06-22 20:12:16.154663 | orchestrator | 2025-06-22 20:12:16.154674 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 20:12:16.154685 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:00.074) 0:02:08.151 *********** 2025-06-22 20:12:16.154704 | orchestrator | 2025-06-22 20:12:16.154715 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 20:12:16.154726 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:00.119) 0:02:08.271 *********** 2025-06-22 20:12:16.154737 | orchestrator | 2025-06-22 20:12:16.154748 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-22 20:12:16.154759 | orchestrator | Sunday 22 June 2025 20:11:47 +0000 (0:00:00.125) 0:02:08.396 *********** 2025-06-22 20:12:16.154770 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:16.154781 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:16.154792 | 
orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:16.154803 | orchestrator | 2025-06-22 20:12:16.154814 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:12:16.154826 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:12:16.154839 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:12:16.154850 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:12:16.154861 | orchestrator | 2025-06-22 20:12:16.154872 | orchestrator | 2025-06-22 20:12:16.154883 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:12:16.154894 | orchestrator | Sunday 22 June 2025 20:12:14 +0000 (0:00:27.144) 0:02:35.542 *********** 2025-06-22 20:12:16.154905 | orchestrator | =============================================================================== 2025-06-22 20:12:16.154916 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 28.23s 2025-06-22 20:12:16.154927 | orchestrator | glance : Restart glance-api container ---------------------------------- 27.15s 2025-06-22 20:12:16.154938 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.35s 2025-06-22 20:12:16.154948 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.03s 2025-06-22 20:12:16.154959 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 5.80s 2025-06-22 20:12:16.155041 | orchestrator | glance : Check glance containers ---------------------------------------- 5.76s 2025-06-22 20:12:16.155052 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.45s 2025-06-22 20:12:16.155063 | orchestrator | glance : Copying over config.json files for services -------------------- 5.21s 2025-06-22 20:12:16.155074 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.31s 2025-06-22 20:12:16.155085 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.12s 2025-06-22 20:12:16.155096 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.95s 2025-06-22 20:12:16.155107 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.91s 2025-06-22 20:12:16.155118 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.81s 2025-06-22 20:12:16.155128 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.66s 2025-06-22 20:12:16.155139 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.56s 2025-06-22 20:12:16.155150 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 3.47s 2025-06-22 20:12:16.155161 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.42s 2025-06-22 20:12:16.155171 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.22s 2025-06-22 20:12:16.155182 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.21s 2025-06-22 20:12:16.155193 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.20s 
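[editor note] The recap above is dominated by the database bootstrap: the play creates the Glance database and user, temporarily enables MariaDB's log_bin_trust_function_creators (the schema migration can create triggers and stored functions, which binary logging otherwise only permits for SUPER users), runs the one-shot bootstrap container that applies the migration, and then switches the flag off again. A rough, hypothetical equivalent of that sequence is sketched below with PyMySQL; host, credentials and the migration command are placeholders, not the values kolla-ansible actually uses (it drives these steps through its mysql Ansible modules and a bootstrap glance-api container).

import subprocess
import pymysql

# Placeholder connection details; the real deployment talks to the database
# VIP with credentials generated by kolla-ansible.
conn = pymysql.connect(host="192.168.16.9", user="root",
                       password="secret", autocommit=True)
try:
    with conn.cursor() as cur:
        # "Creating Glance database" / "Creating Glance database user ..."
        cur.execute("CREATE DATABASE IF NOT EXISTS glance")
        cur.execute("CREATE USER IF NOT EXISTS 'glance'@'%%' IDENTIFIED BY %s",
                    ("glance-db-password",))
        cur.execute("GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'")
        # "Enable log_bin_trust_function_creators function"
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
    # "Running Glance bootstrap container" essentially runs the schema migration.
    subprocess.run(["glance-manage", "db", "sync"], check=True)
finally:
    with conn.cursor() as cur:
        # "Disable log_bin_trust_function_creators function"
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
    conn.close()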
2025-06-22 20:12:16.155204 | orchestrator | 2025-06-22 20:12:16 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:16.155223 | orchestrator | 2025-06-22 20:12:16 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:16.155234 | orchestrator | 2025-06-22 20:12:16 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:16.155246 | orchestrator | 2025-06-22 20:12:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:19.205426 | orchestrator | 2025-06-22 20:12:19 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:19.208857 | orchestrator | 2025-06-22 20:12:19 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:19.209561 | orchestrator | 2025-06-22 20:12:19 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:19.210421 | orchestrator | 2025-06-22 20:12:19 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:19.210442 | orchestrator | 2025-06-22 20:12:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:22.259626 | orchestrator | 2025-06-22 20:12:22 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:22.260710 | orchestrator | 2025-06-22 20:12:22 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:22.262648 | orchestrator | 2025-06-22 20:12:22 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:22.264053 | orchestrator | 2025-06-22 20:12:22 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:22.264079 | orchestrator | 2025-06-22 20:12:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:25.312731 | orchestrator | 2025-06-22 20:12:25 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:25.314187 | orchestrator | 2025-06-22 20:12:25 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:25.315402 | orchestrator | 2025-06-22 20:12:25 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:25.318084 | orchestrator | 2025-06-22 20:12:25 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:25.318443 | orchestrator | 2025-06-22 20:12:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:28.366874 | orchestrator | 2025-06-22 20:12:28 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:28.368664 | orchestrator | 2025-06-22 20:12:28 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:28.369887 | orchestrator | 2025-06-22 20:12:28 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:28.371159 | orchestrator | 2025-06-22 20:12:28 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:28.371286 | orchestrator | 2025-06-22 20:12:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:31.413556 | orchestrator | 2025-06-22 20:12:31 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:31.414549 | orchestrator | 2025-06-22 20:12:31 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:31.415995 | orchestrator | 2025-06-22 20:12:31 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 
2025-06-22 20:12:31.417060 | orchestrator | 2025-06-22 20:12:31 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:31.417093 | orchestrator | 2025-06-22 20:12:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:34.460338 | orchestrator | 2025-06-22 20:12:34 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:34.462799 | orchestrator | 2025-06-22 20:12:34 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:34.464747 | orchestrator | 2025-06-22 20:12:34 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:34.466838 | orchestrator | 2025-06-22 20:12:34 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:34.466868 | orchestrator | 2025-06-22 20:12:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:37.511366 | orchestrator | 2025-06-22 20:12:37 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:37.512188 | orchestrator | 2025-06-22 20:12:37 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:37.514577 | orchestrator | 2025-06-22 20:12:37 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:37.515872 | orchestrator | 2025-06-22 20:12:37 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:37.516011 | orchestrator | 2025-06-22 20:12:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:40.579660 | orchestrator | 2025-06-22 20:12:40 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:40.585010 | orchestrator | 2025-06-22 20:12:40 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:40.585073 | orchestrator | 2025-06-22 20:12:40 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:40.588347 | orchestrator | 2025-06-22 20:12:40 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:40.588893 | orchestrator | 2025-06-22 20:12:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:43.638046 | orchestrator | 2025-06-22 20:12:43 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:43.639211 | orchestrator | 2025-06-22 20:12:43 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:43.640691 | orchestrator | 2025-06-22 20:12:43 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:43.642381 | orchestrator | 2025-06-22 20:12:43 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:43.642540 | orchestrator | 2025-06-22 20:12:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:46.686513 | orchestrator | 2025-06-22 20:12:46 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:46.687617 | orchestrator | 2025-06-22 20:12:46 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:46.690204 | orchestrator | 2025-06-22 20:12:46 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:46.691897 | orchestrator | 2025-06-22 20:12:46 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:46.691989 | orchestrator | 2025-06-22 20:12:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:49.740202 
| orchestrator | 2025-06-22 20:12:49 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:49.742107 | orchestrator | 2025-06-22 20:12:49 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:49.743512 | orchestrator | 2025-06-22 20:12:49 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:49.745399 | orchestrator | 2025-06-22 20:12:49 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:49.745469 | orchestrator | 2025-06-22 20:12:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:52.790863 | orchestrator | 2025-06-22 20:12:52 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:52.793453 | orchestrator | 2025-06-22 20:12:52 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:52.795489 | orchestrator | 2025-06-22 20:12:52 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:52.797775 | orchestrator | 2025-06-22 20:12:52 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:52.797956 | orchestrator | 2025-06-22 20:12:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:55.842558 | orchestrator | 2025-06-22 20:12:55 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:55.844090 | orchestrator | 2025-06-22 20:12:55 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:55.846004 | orchestrator | 2025-06-22 20:12:55 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:55.847806 | orchestrator | 2025-06-22 20:12:55 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:55.847833 | orchestrator | 2025-06-22 20:12:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:58.894488 | orchestrator | 2025-06-22 20:12:58 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:12:58.895709 | orchestrator | 2025-06-22 20:12:58 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:12:58.897290 | orchestrator | 2025-06-22 20:12:58 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:12:58.898647 | orchestrator | 2025-06-22 20:12:58 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:12:58.898673 | orchestrator | 2025-06-22 20:12:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:01.939499 | orchestrator | 2025-06-22 20:13:01 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:01.941308 | orchestrator | 2025-06-22 20:13:01 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:13:01.942492 | orchestrator | 2025-06-22 20:13:01 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:13:01.944197 | orchestrator | 2025-06-22 20:13:01 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:01.944276 | orchestrator | 2025-06-22 20:13:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:04.992663 | orchestrator | 2025-06-22 20:13:04 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:04.995067 | orchestrator | 2025-06-22 20:13:04 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:13:04.997881 | 
orchestrator | 2025-06-22 20:13:04 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:13:05.000802 | orchestrator | 2025-06-22 20:13:04 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:05.001225 | orchestrator | 2025-06-22 20:13:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:08.042521 | orchestrator | 2025-06-22 20:13:08 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:08.043586 | orchestrator | 2025-06-22 20:13:08 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state STARTED 2025-06-22 20:13:08.044050 | orchestrator | 2025-06-22 20:13:08 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:13:08.045328 | orchestrator | 2025-06-22 20:13:08 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:08.045371 | orchestrator | 2025-06-22 20:13:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:11.104453 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:11.107537 | orchestrator | 2025-06-22 20:13:11.107598 | orchestrator | 2025-06-22 20:13:11.107614 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:13:11.107629 | orchestrator | 2025-06-22 20:13:11.107643 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:13:11.107657 | orchestrator | Sunday 22 June 2025 20:12:18 +0000 (0:00:00.257) 0:00:00.257 *********** 2025-06-22 20:13:11.107670 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:11.107685 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:13:11.107698 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:13:11.107711 | orchestrator | 2025-06-22 20:13:11.107724 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:13:11.107737 | orchestrator | Sunday 22 June 2025 20:12:19 +0000 (0:00:00.290) 0:00:00.547 *********** 2025-06-22 20:13:11.107750 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-22 20:13:11.107763 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-22 20:13:11.107776 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-22 20:13:11.107789 | orchestrator | 2025-06-22 20:13:11.107802 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-22 20:13:11.107815 | orchestrator | 2025-06-22 20:13:11.107829 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-22 20:13:11.107842 | orchestrator | Sunday 22 June 2025 20:12:19 +0000 (0:00:00.428) 0:00:00.976 *********** 2025-06-22 20:13:11.107855 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:13:11.107869 | orchestrator | 2025-06-22 20:13:11.107882 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-22 20:13:11.107966 | orchestrator | Sunday 22 June 2025 20:12:20 +0000 (0:00:00.539) 0:00:01.515 *********** 2025-06-22 20:13:11.107983 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-22 20:13:11.107999 | orchestrator | 2025-06-22 20:13:11.108013 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] 
********************** 2025-06-22 20:13:11.108122 | orchestrator | Sunday 22 June 2025 20:12:23 +0000 (0:00:03.101) 0:00:04.616 *********** 2025-06-22 20:13:11.108136 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-22 20:13:11.108148 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-22 20:13:11.108160 | orchestrator | 2025-06-22 20:13:11.108172 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-22 20:13:11.108184 | orchestrator | Sunday 22 June 2025 20:12:29 +0000 (0:00:06.024) 0:00:10.641 *********** 2025-06-22 20:13:11.108196 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:13:11.108209 | orchestrator | 2025-06-22 20:13:11.108220 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-22 20:13:11.108233 | orchestrator | Sunday 22 June 2025 20:12:32 +0000 (0:00:03.472) 0:00:14.113 *********** 2025-06-22 20:13:11.108243 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:13:11.108256 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-22 20:13:11.108268 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-22 20:13:11.108280 | orchestrator | 2025-06-22 20:13:11.108292 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-22 20:13:11.108328 | orchestrator | Sunday 22 June 2025 20:12:40 +0000 (0:00:08.051) 0:00:22.165 *********** 2025-06-22 20:13:11.108341 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:13:11.108353 | orchestrator | 2025-06-22 20:13:11.108365 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-22 20:13:11.108390 | orchestrator | Sunday 22 June 2025 20:12:43 +0000 (0:00:03.095) 0:00:25.260 *********** 2025-06-22 20:13:11.108403 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-22 20:13:11.108415 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-22 20:13:11.108427 | orchestrator | 2025-06-22 20:13:11.108438 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-22 20:13:11.108450 | orchestrator | Sunday 22 June 2025 20:12:50 +0000 (0:00:06.421) 0:00:31.681 *********** 2025-06-22 20:13:11.108462 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-22 20:13:11.108473 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-22 20:13:11.108485 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-22 20:13:11.108497 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-22 20:13:11.108508 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-22 20:13:11.108520 | orchestrator | 2025-06-22 20:13:11.108532 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-22 20:13:11.108544 | orchestrator | Sunday 22 June 2025 20:13:04 +0000 (0:00:13.962) 0:00:45.644 *********** 2025-06-22 20:13:11.108556 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:13:11.108568 | orchestrator | 2025-06-22 20:13:11.108580 | 
orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-22 20:13:11.108592 | orchestrator | Sunday 22 June 2025 20:13:04 +0000 (0:00:00.544) 0:00:46.188 *********** 2025-06-22 20:13:11.108606 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-06-22 20:13:11.108652 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1750623186.0425785-6620-64881208643801/AnsiballZ_compute_flavor.py\", line 107, in \n _ansiballz_main()\n File \"/tmp/ansible-tmp-1750623186.0425785-6620-64881208643801/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1750623186.0425785-6620-64881208643801/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"\", line 226, in run_module\n File \"\", line 98, in _run_module_code\n File \"\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_7fb39zoi/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in \n File \"/tmp/ansible_os_nova_flavor_payload_7fb39zoi/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_7fb39zoi/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_7fb39zoi/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in 
endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-06-22 20:13:11.108677 | orchestrator | 2025-06-22 20:13:11.108689 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:13:11.108701 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-06-22 20:13:11.108713 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:13:11.108726 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:13:11.108733 | orchestrator | 2025-06-22 20:13:11.108740 | orchestrator | 2025-06-22 20:13:11.108748 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:13:11.108756 | orchestrator | Sunday 22 June 2025 20:13:07 +0000 (0:00:03.036) 0:00:49.224 *********** 2025-06-22 20:13:11.108768 | orchestrator | =============================================================================== 2025-06-22 20:13:11.108775 | orchestrator | octavia : Adding octavia related roles --------------------------------- 13.96s 2025-06-22 20:13:11.108782 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.05s 2025-06-22 20:13:11.108790 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 6.42s 2025-06-22 20:13:11.108797 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.02s 2025-06-22 20:13:11.108804 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.47s 2025-06-22 20:13:11.108812 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.10s 2025-06-22 20:13:11.108819 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.10s 2025-06-22 20:13:11.108826 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.04s 2025-06-22 20:13:11.108834 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.54s 2025-06-22 20:13:11.108841 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.54s 2025-06-22 20:13:11.108848 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.43s 2025-06-22 20:13:11.108856 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-06-22 20:13:11.108869 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state SUCCESS 2025-06-22 20:13:11.108877 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:11.110321 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:13:11.111748 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:11.111974 | orchestrator | 2025-06-22 20:13:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:14.156073 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 
2025-06-22 20:13:11.108869 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 73093d54-b9bf-47b3-94d5-6c0d51cca6d5 is in state SUCCESS 2025-06-22 20:13:11.108877 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:11.110321 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:13:11.111748 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:11.111974 | orchestrator | 2025-06-22 20:13:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:14.156073 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:14.156885 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:14.157927 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:13:14.158613 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:14.158741 | orchestrator | 2025-06-22 20:13:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:17.203710 | orchestrator | 2025-06-22 20:13:17 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:17.208024 | orchestrator | 2025-06-22 20:13:17 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:17.210434 | orchestrator | 2025-06-22 20:13:17 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state STARTED 2025-06-22 20:13:17.213433 | orchestrator | 2025-06-22 20:13:17 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:17.214071 | orchestrator | 2025-06-22 20:13:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:20.258406 | orchestrator | 2025-06-22 20:13:20 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:20.260432 | orchestrator | 2025-06-22 20:13:20 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:20.265028 | orchestrator | 2025-06-22 20:13:20 | INFO  | Task 371a191d-3d7e-4e8d-addf-12d09651fc4b is in state SUCCESS 2025-06-22 20:13:20.267729 | orchestrator | 2025-06-22 20:13:20.267777 | orchestrator | 2025-06-22 20:13:20.267790 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:13:20.267803 | orchestrator | 2025-06-22 20:13:20.267814 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:13:20.267826 | orchestrator | Sunday 22 June 2025 20:10:29 +0000 (0:00:00.246) 0:00:00.246 *********** 2025-06-22 20:13:20.267837 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:20.267849 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:13:20.267860 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:13:20.267871 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:13:20.267882 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:13:20.267920 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:13:20.267932 | orchestrator | 2025-06-22 20:13:20.268109 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:13:20.268122 | orchestrator | Sunday 22 June 2025 20:10:30 +0000 (0:00:00.879) 0:00:01.127 *********** 2025-06-22 20:13:20.268133 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-22 20:13:20.268145 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-22 20:13:20.268156 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-22 20:13:20.268167 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-22 20:13:20.268177 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-22 20:13:20.268217 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-22 20:13:20.268228 | orchestrator | 2025-06-22 20:13:20.268239 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-22 20:13:20.268250 | orchestrator
| 2025-06-22 20:13:20.268261 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:13:20.268272 | orchestrator | Sunday 22 June 2025 20:10:31 +0000 (0:00:01.133) 0:00:02.261 *********** 2025-06-22 20:13:20.268283 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:13:20.268296 | orchestrator | 2025-06-22 20:13:20.268307 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-22 20:13:20.268320 | orchestrator | Sunday 22 June 2025 20:10:34 +0000 (0:00:02.806) 0:00:05.067 *********** 2025-06-22 20:13:20.268333 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-22 20:13:20.268345 | orchestrator | 2025-06-22 20:13:20.268416 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-22 20:13:20.268978 | orchestrator | Sunday 22 June 2025 20:10:38 +0000 (0:00:03.753) 0:00:08.820 *********** 2025-06-22 20:13:20.268990 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-22 20:13:20.269002 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-22 20:13:20.269013 | orchestrator | 2025-06-22 20:13:20.269024 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-22 20:13:20.269035 | orchestrator | Sunday 22 June 2025 20:10:45 +0000 (0:00:07.228) 0:00:16.048 *********** 2025-06-22 20:13:20.269046 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:13:20.269057 | orchestrator | 2025-06-22 20:13:20.269068 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-22 20:13:20.269079 | orchestrator | Sunday 22 June 2025 20:10:49 +0000 (0:00:03.419) 0:00:19.468 *********** 2025-06-22 20:13:20.269089 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:13:20.269100 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-06-22 20:13:20.269111 | orchestrator | 2025-06-22 20:13:20.269122 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-22 20:13:20.269133 | orchestrator | Sunday 22 June 2025 20:10:53 +0000 (0:00:04.150) 0:00:23.618 *********** 2025-06-22 20:13:20.269145 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:13:20.269155 | orchestrator | 2025-06-22 20:13:20.269166 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-22 20:13:20.269177 | orchestrator | Sunday 22 June 2025 20:10:56 +0000 (0:00:03.743) 0:00:27.362 *********** 2025-06-22 20:13:20.269188 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-22 20:13:20.269199 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-22 20:13:20.269210 | orchestrator | 2025-06-22 20:13:20.269220 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-22 20:13:20.269231 | orchestrator | Sunday 22 June 2025 20:11:06 +0000 (0:00:09.287) 0:00:36.649 *********** 2025-06-22 20:13:20.269261 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.269328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.269343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.269355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269367 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269469 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269480 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.269517 | orchestrator | 2025-06-22 20:13:20.269553 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:13:20.269566 | orchestrator | Sunday 22 June 2025 20:11:08 +0000 (0:00:02.581) 0:00:39.231 *********** 2025-06-22 20:13:20.269577 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.269590 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.269602 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.269614 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:13:20.269626 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:13:20.269637 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:13:20.269712 | orchestrator | 2025-06-22 20:13:20.269727 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:13:20.269740 | orchestrator | Sunday 22 June 2025 20:11:09 +0000 (0:00:00.454) 0:00:39.686 *********** 2025-06-22 20:13:20.269752 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.269764 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.269776 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.269787 | orchestrator | included: 
/ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:13:20.269800 | orchestrator | 2025-06-22 20:13:20.269812 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-22 20:13:20.269825 | orchestrator | Sunday 22 June 2025 20:11:09 +0000 (0:00:00.700) 0:00:40.386 *********** 2025-06-22 20:13:20.269837 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-22 20:13:20.269849 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-22 20:13:20.269861 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-22 20:13:20.269873 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-22 20:13:20.269885 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-22 20:13:20.269951 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-22 20:13:20.269963 | orchestrator | 2025-06-22 20:13:20.269975 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-22 20:13:20.269985 | orchestrator | Sunday 22 June 2025 20:11:11 +0000 (0:00:01.599) 0:00:41.986 *********** 2025-06-22 20:13:20.269998 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:13:20.270011 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:13:20.270101 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:13:20.270158 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:13:20.270172 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:13:20.270183 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:13:20.270195 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 
'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:13:20.270231 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:13:20.270274 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:13:20.270288 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:13:20.270301 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:13:20.270312 | orchestrator | changed: 
[testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:13:20.270331 | orchestrator | 2025-06-22 20:13:20.270342 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-22 20:13:20.270353 | orchestrator | Sunday 22 June 2025 20:11:14 +0000 (0:00:03.391) 0:00:45.377 *********** 2025-06-22 20:13:20.270365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:13:20.270377 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:13:20.270388 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:13:20.270399 | orchestrator | 2025-06-22 20:13:20.270410 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-22 20:13:20.270426 | orchestrator | Sunday 22 June 2025 20:11:16 +0000 (0:00:01.663) 0:00:47.040 *********** 2025-06-22 20:13:20.270437 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-22 20:13:20.270448 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-22 20:13:20.270459 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-22 20:13:20.270469 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:13:20.270480 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:13:20.270521 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:13:20.270534 | orchestrator | 2025-06-22 20:13:20.270545 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-22 20:13:20.270556 | orchestrator | Sunday 22 June 2025 20:11:19 +0000 (0:00:02.711) 0:00:49.752 *********** 2025-06-22 20:13:20.270566 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-22 20:13:20.270578 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-22 20:13:20.270589 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-22 20:13:20.270600 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-22 20:13:20.270610 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-22 20:13:20.270621 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-22 20:13:20.270632 | orchestrator | 2025-06-22 20:13:20.270643 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-22 20:13:20.270654 | orchestrator | Sunday 22 June 2025 20:11:20 +0000 (0:00:01.133) 0:00:50.886 *********** 2025-06-22 20:13:20.270665 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 20:13:20.270675 | orchestrator | 2025-06-22 20:13:20.270686 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-22 20:13:20.270697 | orchestrator | Sunday 22 June 2025 20:11:20 +0000 (0:00:00.128) 0:00:51.014 *********** 2025-06-22 20:13:20.270708 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.270718 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.270729 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.270740 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:13:20.270751 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:13:20.270762 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:13:20.270772 | orchestrator | 2025-06-22 20:13:20.270783 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:13:20.270794 | orchestrator | Sunday 22 June 2025 20:11:21 +0000 (0:00:00.736) 0:00:51.750 *********** 2025-06-22 20:13:20.270814 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:13:20.270826 | orchestrator | 2025-06-22 20:13:20.270837 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-22 20:13:20.270848 | orchestrator | Sunday 22 June 2025 20:11:22 +0000 (0:00:01.270) 0:00:53.021 *********** 2025-06-22 20:13:20.270859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.270871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.271005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.271026 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271037 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271058 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271144 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271163 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271175 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271186 | orchestrator | 2025-06-22 20:13:20.271197 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-22 20:13:20.271208 | orchestrator | Sunday 22 June 2025 20:11:25 +0000 (0:00:02.815) 0:00:55.837 *********** 2025-06-22 20:13:20.271220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.271240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.271267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.271288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271298 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.271308 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.271317 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.271332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271350 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  
2025-06-22 20:13:20.271365 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:13:20.271376 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271396 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:13:20.271406 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271421 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271431 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:13:20.271441 | orchestrator | 2025-06-22 20:13:20.271450 | orchestrator | TASK 
[service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-22 20:13:20.271460 | orchestrator | Sunday 22 June 2025 20:11:27 +0000 (0:00:01.923) 0:00:57.761 *********** 2025-06-22 20:13:20.271476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.271493 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271503 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.271513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.271523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271533 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.271551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.271594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271604 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:13:20.271614 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.271624 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271644 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:13:20.271663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.271690 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:13:20.271700 | orchestrator | 2025-06-22 20:13:20.271710 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-22 20:13:20.271720 | orchestrator | Sunday 22 June 2025 20:11:29 +0000 (0:00:02.111) 0:00:59.872 *********** 2025-06-22 20:13:20.271730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.271740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.271754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.271777 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271798 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271878 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.271910 | orchestrator | 2025-06-22 20:13:20.271921 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-22 20:13:20.271931 | orchestrator | Sunday 22 June 2025 20:11:32 +0000 (0:00:02.920) 0:01:02.793 *********** 2025-06-22 20:13:20.271941 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:13:20.271951 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:13:20.271961 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:13:20.271970 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:13:20.271980 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:13:20.271990 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:13:20.272000 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:13:20.272009 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:13:20.272019 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:13:20.272035 | orchestrator | 2025-06-22 20:13:20.272045 | orchestrator | TASK [cinder : Copying over cinder.conf] 
*************************************** 2025-06-22 20:13:20.272054 | orchestrator | Sunday 22 June 2025 20:11:34 +0000 (0:00:02.140) 0:01:04.933 *********** 2025-06-22 20:13:20.272069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.272086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.272096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.272107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272117 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272183 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272199 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272223 | orchestrator | 2025-06-22 20:13:20.272328 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-22 20:13:20.272339 | orchestrator | Sunday 22 June 2025 20:11:43 +0000 (0:00:08.805) 0:01:13.738 *********** 2025-06-22 20:13:20.272356 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.272366 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.272376 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.272386 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:13:20.272395 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:13:20.272405 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:13:20.272414 | orchestrator | 2025-06-22 20:13:20.272424 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-22 20:13:20.272433 | orchestrator | Sunday 22 June 2025 20:11:45 +0000 (0:00:01.784) 0:01:15.523 
*********** 2025-06-22 20:13:20.272443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.272454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272464 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.272474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.272491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272501 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.272524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:13:20.272535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272545 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.272555 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272581 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:13:20.272591 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272606 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272632 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:13:20.272642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:13:20.272652 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:13:20.272661 | orchestrator | 2025-06-22 20:13:20.272671 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-22 20:13:20.272687 | orchestrator | Sunday 22 June 2025 20:11:45 +0000 (0:00:00.825) 0:01:16.348 *********** 2025-06-22 20:13:20.272697 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.272707 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.272716 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.272726 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:13:20.272735 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:13:20.272745 | orchestrator | skipping: [testbed-node-5] 
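The container definitions echoed in the task output above each carry a healthcheck: the cinder-api containers are probed with 'healthcheck_curl' against their bound API address (for example http://192.168.16.10:8776 on testbed-node-0), while cinder-scheduler, cinder-volume and cinder-backup use 'healthcheck_port <process> 5672', a check tied to the usual AMQP/RabbitMQ port. As a rough illustration of what such probes amount to, a minimal Python sketch (an assumption for illustration, not kolla's actual healthcheck scripts) could look like this:

    # Simplified stand-ins for the two probe styles referenced in the container
    # healthchecks above ('healthcheck_curl <url>' and 'healthcheck_port <process> <port>').
    # Assumption: illustrative only; the real kolla scripts differ (healthcheck_port,
    # for instance, looks at the named process rather than opening a new connection).
    import socket
    import urllib.request
    from urllib.error import HTTPError, URLError


    def port_probe(host: str, port: int, timeout: float = 30.0) -> bool:
        # Rough 'healthcheck_port'-style check: is a TCP endpoint reachable?
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False


    def http_probe(url: str, timeout: float = 30.0) -> bool:
        # Rough 'healthcheck_curl'-style check: does the URL answer HTTP at all?
        try:
            with urllib.request.urlopen(url, timeout=timeout):
                return True
        except HTTPError:
            return True  # the service answered, even if with an error status
        except (URLError, OSError):
            return False


    if __name__ == "__main__":
        # Example values taken from the cinder-api entry for testbed-node-0 above;
        # pairing that node address with port 5672 is an assumption for illustration.
        print(http_probe("http://192.168.16.10:8776"))
        print(port_probe("192.168.16.10", 5672))
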
2025-06-22 20:13:20.272754 | orchestrator | 2025-06-22 20:13:20.272764 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-22 20:13:20.272774 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:00.652) 0:01:17.000 *********** 2025-06-22 20:13:20.272784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.272799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.272816 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272826 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:13:20.272853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272952 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272981 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.272992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:13:20.273002 | orchestrator | 2025-06-22 20:13:20.273012 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:13:20.273022 | orchestrator | Sunday 22 June 2025 20:11:48 +0000 (0:00:02.154) 0:01:19.155 *********** 2025-06-22 20:13:20.273032 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.273042 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:13:20.273051 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:13:20.273061 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:13:20.273071 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:13:20.273080 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:13:20.273090 | orchestrator | 2025-06-22 20:13:20.273100 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-22 
20:13:20.273109 | orchestrator | Sunday 22 June 2025 20:11:49 +0000 (0:00:00.773) 0:01:19.929 *********** 2025-06-22 20:13:20.273119 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:13:20.273127 | orchestrator | 2025-06-22 20:13:20.273135 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-22 20:13:20.273147 | orchestrator | Sunday 22 June 2025 20:11:51 +0000 (0:00:02.101) 0:01:22.030 *********** 2025-06-22 20:13:20.273156 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:13:20.273163 | orchestrator | 2025-06-22 20:13:20.273171 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-22 20:13:20.273179 | orchestrator | Sunday 22 June 2025 20:11:53 +0000 (0:00:02.332) 0:01:24.363 *********** 2025-06-22 20:13:20.273187 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:13:20.273195 | orchestrator | 2025-06-22 20:13:20.273203 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:13:20.273211 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:18.381) 0:01:42.745 *********** 2025-06-22 20:13:20.273219 | orchestrator | 2025-06-22 20:13:20.273231 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:13:20.273239 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:00.060) 0:01:42.806 *********** 2025-06-22 20:13:20.273254 | orchestrator | 2025-06-22 20:13:20.273262 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:13:20.273270 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:00.060) 0:01:42.866 *********** 2025-06-22 20:13:20.273278 | orchestrator | 2025-06-22 20:13:20.273285 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:13:20.273293 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:00.059) 0:01:42.925 *********** 2025-06-22 20:13:20.273301 | orchestrator | 2025-06-22 20:13:20.273309 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:13:20.273317 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:00.059) 0:01:42.985 *********** 2025-06-22 20:13:20.273325 | orchestrator | 2025-06-22 20:13:20.273333 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:13:20.273340 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:00.056) 0:01:43.041 *********** 2025-06-22 20:13:20.273348 | orchestrator | 2025-06-22 20:13:20.273356 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-22 20:13:20.273364 | orchestrator | Sunday 22 June 2025 20:12:12 +0000 (0:00:00.063) 0:01:43.104 *********** 2025-06-22 20:13:20.273372 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:13:20.273380 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:13:20.273387 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:13:20.273395 | orchestrator | 2025-06-22 20:13:20.273403 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-22 20:13:20.273411 | orchestrator | Sunday 22 June 2025 20:12:32 +0000 (0:00:19.536) 0:02:02.641 *********** 2025-06-22 20:13:20.273419 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:13:20.273427 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:13:20.273435 | 
orchestrator | changed: [testbed-node-2] 2025-06-22 20:13:20.273443 | orchestrator | 2025-06-22 20:13:20.273451 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-22 20:13:20.273459 | orchestrator | Sunday 22 June 2025 20:12:37 +0000 (0:00:05.200) 0:02:07.842 *********** 2025-06-22 20:13:20.273466 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:13:20.273474 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:13:20.273482 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:13:20.273490 | orchestrator | 2025-06-22 20:13:20.273498 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-22 20:13:20.273506 | orchestrator | Sunday 22 June 2025 20:13:11 +0000 (0:00:34.287) 0:02:42.129 *********** 2025-06-22 20:13:20.273513 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:13:20.273521 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:13:20.273529 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:13:20.273537 | orchestrator | 2025-06-22 20:13:20.273545 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-22 20:13:20.273553 | orchestrator | Sunday 22 June 2025 20:13:18 +0000 (0:00:06.361) 0:02:48.491 *********** 2025-06-22 20:13:20.273561 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:13:20.273568 | orchestrator | 2025-06-22 20:13:20.273576 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:13:20.273585 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:13:20.273594 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:13:20.273602 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:13:20.273610 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:13:20.273618 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:13:20.273631 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:13:20.273639 | orchestrator | 2025-06-22 20:13:20.273647 | orchestrator | 2025-06-22 20:13:20.273655 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:13:20.273663 | orchestrator | Sunday 22 June 2025 20:13:18 +0000 (0:00:00.610) 0:02:49.102 *********** 2025-06-22 20:13:20.273671 | orchestrator | =============================================================================== 2025-06-22 20:13:20.273678 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 34.29s 2025-06-22 20:13:20.273686 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 19.54s 2025-06-22 20:13:20.273698 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.38s 2025-06-22 20:13:20.273706 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 9.29s 2025-06-22 20:13:20.273714 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 8.81s 2025-06-22 20:13:20.273721 | orchestrator | service-ks-register : cinder | Creating endpoints 
----------------------- 7.23s 2025-06-22 20:13:20.273729 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.36s 2025-06-22 20:13:20.273737 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.20s 2025-06-22 20:13:20.273750 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.15s 2025-06-22 20:13:20.273758 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.75s 2025-06-22 20:13:20.273793 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.74s 2025-06-22 20:13:20.273804 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.42s 2025-06-22 20:13:20.273812 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.39s 2025-06-22 20:13:20.273820 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.92s 2025-06-22 20:13:20.273827 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.82s 2025-06-22 20:13:20.273835 | orchestrator | cinder : include_tasks -------------------------------------------------- 2.81s 2025-06-22 20:13:20.273843 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.71s 2025-06-22 20:13:20.273851 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.58s 2025-06-22 20:13:20.273859 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.33s 2025-06-22 20:13:20.273867 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.15s 2025-06-22 20:13:20.273875 | orchestrator | 2025-06-22 20:13:20 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:20.273883 | orchestrator | 2025-06-22 20:13:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:23.304698 | orchestrator | 2025-06-22 20:13:23 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:23.308557 | orchestrator | 2025-06-22 20:13:23 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:23.310522 | orchestrator | 2025-06-22 20:13:23 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:23.310554 | orchestrator | 2025-06-22 20:13:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:26.366393 | orchestrator | 2025-06-22 20:13:26 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:26.367197 | orchestrator | 2025-06-22 20:13:26 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:26.368847 | orchestrator | 2025-06-22 20:13:26 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:26.369045 | orchestrator | 2025-06-22 20:13:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:29.426683 | orchestrator | 2025-06-22 20:13:29 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:29.432130 | orchestrator | 2025-06-22 20:13:29 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:29.435876 | orchestrator | 2025-06-22 20:13:29 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:29.435948 | orchestrator | 2025-06-22 20:13:29 | INFO  | Wait 1 second(s) 
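The repeated 'Task <uuid> is in state STARTED' / 'Wait 1 second(s) until the next check' lines are the deployment wrapper polling several long-running tasks until each reaches a terminal state. A generic polling loop of that shape might look like the following Python sketch; the fake state source and the terminal state names are assumptions standing in for the real task backend.

    # Generic polling loop of the shape reflected in the 'is in state STARTED' /
    # 'Wait 1 second(s) until the next check' lines. Everything here is a sketch:
    # the state source is faked and the terminal state names are assumed.
    import itertools
    import logging
    import time

    logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s", level=logging.INFO)

    TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal states


    def wait_for_tasks(task_ids, get_state, interval=1.0):
        # Poll every pending task, report its state, and sleep between rounds
        # until all tasks have reached a terminal state.
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)
                logging.info("Task %s is in state %s", task_id, state)
                if state in TERMINAL_STATES:
                    pending.discard(task_id)
            if pending:
                logging.info("Wait %d second(s) until the next check", int(interval))
                time.sleep(interval)


    if __name__ == "__main__":
        # Demo with a fake backend that reports STARTED twice before SUCCESS;
        # the shortened task ids mirror the ones polled in this log.
        states = {t: itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
                  for t in ("e162b4c0", "5172556c", "2354e75f")}
        wait_for_tasks(states, get_state=lambda t: next(states[t]), interval=1.0)
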
until the next check 2025-06-22 20:13:32.480090 | orchestrator | 2025-06-22 20:13:32 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:32.481650 | orchestrator | 2025-06-22 20:13:32 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:32.483427 | orchestrator | 2025-06-22 20:13:32 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:32.483457 | orchestrator | 2025-06-22 20:13:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:35.532606 | orchestrator | 2025-06-22 20:13:35 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:35.534701 | orchestrator | 2025-06-22 20:13:35 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:35.536711 | orchestrator | 2025-06-22 20:13:35 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:35.536757 | orchestrator | 2025-06-22 20:13:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:38.581585 | orchestrator | 2025-06-22 20:13:38 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:38.583832 | orchestrator | 2025-06-22 20:13:38 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:38.585146 | orchestrator | 2025-06-22 20:13:38 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:38.585185 | orchestrator | 2025-06-22 20:13:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:41.622974 | orchestrator | 2025-06-22 20:13:41 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:41.624426 | orchestrator | 2025-06-22 20:13:41 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:41.625522 | orchestrator | 2025-06-22 20:13:41 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:41.625551 | orchestrator | 2025-06-22 20:13:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:44.673672 | orchestrator | 2025-06-22 20:13:44 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:44.675964 | orchestrator | 2025-06-22 20:13:44 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:44.680319 | orchestrator | 2025-06-22 20:13:44 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:44.680348 | orchestrator | 2025-06-22 20:13:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:47.720445 | orchestrator | 2025-06-22 20:13:47 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:47.723023 | orchestrator | 2025-06-22 20:13:47 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:47.725407 | orchestrator | 2025-06-22 20:13:47 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:47.725476 | orchestrator | 2025-06-22 20:13:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:50.767360 | orchestrator | 2025-06-22 20:13:50 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:50.767813 | orchestrator | 2025-06-22 20:13:50 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:50.768689 | orchestrator | 2025-06-22 20:13:50 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 
2025-06-22 20:13:50.768729 | orchestrator | 2025-06-22 20:13:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:53.811938 | orchestrator | 2025-06-22 20:13:53 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:53.813398 | orchestrator | 2025-06-22 20:13:53 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:53.814283 | orchestrator | 2025-06-22 20:13:53 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:53.814322 | orchestrator | 2025-06-22 20:13:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:56.863572 | orchestrator | 2025-06-22 20:13:56 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:56.864964 | orchestrator | 2025-06-22 20:13:56 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:56.866356 | orchestrator | 2025-06-22 20:13:56 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:56.866383 | orchestrator | 2025-06-22 20:13:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:59.912769 | orchestrator | 2025-06-22 20:13:59 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:13:59.913242 | orchestrator | 2025-06-22 20:13:59 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:13:59.919719 | orchestrator | 2025-06-22 20:13:59 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:13:59.919755 | orchestrator | 2025-06-22 20:13:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:02.964252 | orchestrator | 2025-06-22 20:14:02 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:02.964470 | orchestrator | 2025-06-22 20:14:02 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:02.965440 | orchestrator | 2025-06-22 20:14:02 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:02.965465 | orchestrator | 2025-06-22 20:14:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:06.002718 | orchestrator | 2025-06-22 20:14:05 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:06.004909 | orchestrator | 2025-06-22 20:14:06 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:06.006949 | orchestrator | 2025-06-22 20:14:06 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:06.006972 | orchestrator | 2025-06-22 20:14:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:09.046274 | orchestrator | 2025-06-22 20:14:09 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:09.056054 | orchestrator | 2025-06-22 20:14:09 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:09.056135 | orchestrator | 2025-06-22 20:14:09 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:09.056150 | orchestrator | 2025-06-22 20:14:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:12.092099 | orchestrator | 2025-06-22 20:14:12 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:12.094080 | orchestrator | 2025-06-22 20:14:12 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:12.096471 | orchestrator | 
2025-06-22 20:14:12 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:12.097103 | orchestrator | 2025-06-22 20:14:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:15.137500 | orchestrator | 2025-06-22 20:14:15 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:15.139580 | orchestrator | 2025-06-22 20:14:15 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:15.141538 | orchestrator | 2025-06-22 20:14:15 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:15.141951 | orchestrator | 2025-06-22 20:14:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:18.190330 | orchestrator | 2025-06-22 20:14:18 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:18.192493 | orchestrator | 2025-06-22 20:14:18 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:18.194575 | orchestrator | 2025-06-22 20:14:18 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:18.194620 | orchestrator | 2025-06-22 20:14:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:21.242007 | orchestrator | 2025-06-22 20:14:21 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:21.243483 | orchestrator | 2025-06-22 20:14:21 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:21.245599 | orchestrator | 2025-06-22 20:14:21 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:21.245642 | orchestrator | 2025-06-22 20:14:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:24.281981 | orchestrator | 2025-06-22 20:14:24 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:24.283397 | orchestrator | 2025-06-22 20:14:24 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:24.285830 | orchestrator | 2025-06-22 20:14:24 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:24.286217 | orchestrator | 2025-06-22 20:14:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:27.335812 | orchestrator | 2025-06-22 20:14:27 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:27.337035 | orchestrator | 2025-06-22 20:14:27 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:27.339718 | orchestrator | 2025-06-22 20:14:27 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:27.339753 | orchestrator | 2025-06-22 20:14:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:30.393349 | orchestrator | 2025-06-22 20:14:30 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:30.395591 | orchestrator | 2025-06-22 20:14:30 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:30.397362 | orchestrator | 2025-06-22 20:14:30 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:30.397669 | orchestrator | 2025-06-22 20:14:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:33.448454 | orchestrator | 2025-06-22 20:14:33 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:33.450008 | orchestrator | 2025-06-22 20:14:33 | INFO  | Task 
5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:33.453196 | orchestrator | 2025-06-22 20:14:33 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:33.453307 | orchestrator | 2025-06-22 20:14:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:36.496176 | orchestrator | 2025-06-22 20:14:36 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:36.497702 | orchestrator | 2025-06-22 20:14:36 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:36.499140 | orchestrator | 2025-06-22 20:14:36 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:36.499283 | orchestrator | 2025-06-22 20:14:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:39.547415 | orchestrator | 2025-06-22 20:14:39 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:39.547504 | orchestrator | 2025-06-22 20:14:39 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:39.549716 | orchestrator | 2025-06-22 20:14:39 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:39.549759 | orchestrator | 2025-06-22 20:14:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:42.600290 | orchestrator | 2025-06-22 20:14:42 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:42.601904 | orchestrator | 2025-06-22 20:14:42 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:42.604352 | orchestrator | 2025-06-22 20:14:42 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:42.604449 | orchestrator | 2025-06-22 20:14:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:45.654642 | orchestrator | 2025-06-22 20:14:45 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:45.656810 | orchestrator | 2025-06-22 20:14:45 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:45.659417 | orchestrator | 2025-06-22 20:14:45 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state STARTED 2025-06-22 20:14:45.659461 | orchestrator | 2025-06-22 20:14:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:48.701110 | orchestrator | 2025-06-22 20:14:48 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:48.702174 | orchestrator | 2025-06-22 20:14:48 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:48.703210 | orchestrator | 2025-06-22 20:14:48 | INFO  | Task 2354e75f-b560-4c71-9d1d-54dcc5d233a9 is in state SUCCESS 2025-06-22 20:14:48.703412 | orchestrator | 2025-06-22 20:14:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:51.749394 | orchestrator | 2025-06-22 20:14:51 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:51.751021 | orchestrator | 2025-06-22 20:14:51 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:51.751069 | orchestrator | 2025-06-22 20:14:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:54.788288 | orchestrator | 2025-06-22 20:14:54 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:54.789458 | orchestrator | 2025-06-22 20:14:54 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state 
STARTED 2025-06-22 20:14:54.789550 | orchestrator | 2025-06-22 20:14:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:57.834758 | orchestrator | 2025-06-22 20:14:57 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:14:57.835879 | orchestrator | 2025-06-22 20:14:57 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:14:57.836767 | orchestrator | 2025-06-22 20:14:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:00.875532 | orchestrator | 2025-06-22 20:15:00 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:00.876311 | orchestrator | 2025-06-22 20:15:00 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:15:00.876526 | orchestrator | 2025-06-22 20:15:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:03.914158 | orchestrator | 2025-06-22 20:15:03 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:03.915226 | orchestrator | 2025-06-22 20:15:03 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:15:03.915274 | orchestrator | 2025-06-22 20:15:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:06.957883 | orchestrator | 2025-06-22 20:15:06 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:06.962932 | orchestrator | 2025-06-22 20:15:06 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:15:06.962980 | orchestrator | 2025-06-22 20:15:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:10.004673 | orchestrator | 2025-06-22 20:15:10 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:10.006316 | orchestrator | 2025-06-22 20:15:10 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:15:10.006911 | orchestrator | 2025-06-22 20:15:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:13.051081 | orchestrator | 2025-06-22 20:15:13 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:13.053321 | orchestrator | 2025-06-22 20:15:13 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:15:13.053397 | orchestrator | 2025-06-22 20:15:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:16.090659 | orchestrator | 2025-06-22 20:15:16 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:16.092629 | orchestrator | 2025-06-22 20:15:16 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:15:16.092660 | orchestrator | 2025-06-22 20:15:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:19.127151 | orchestrator | 2025-06-22 20:15:19 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:19.128465 | orchestrator | 2025-06-22 20:15:19 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state STARTED 2025-06-22 20:15:19.128539 | orchestrator | 2025-06-22 20:15:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:22.174079 | orchestrator | 2025-06-22 20:15:22 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:22.177647 | orchestrator | 2025-06-22 20:15:22 | INFO  | Task 5172556c-9c4e-4f58-9568-318d2a7180f4 is in state SUCCESS 2025-06-22 20:15:22.179334 | orchestrator | 2025-06-22 20:15:22.179374 | orchestrator | 
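The repeated "Task <uuid> is in state STARTED ... Wait 1 second(s) until the next check" messages above come from a client-side watcher that polls the deployment tasks until each one reaches a terminal state (here SUCCESS at 20:14:48 and 20:15:22). A minimal sketch of such a poll loop, assuming a hypothetical get_task_state() helper and terminal-state set (not the actual OSISM implementation):

import time

TERMINAL_STATES = {"SUCCESS", "FAILURE"}  # assumed terminal states for this sketch

def get_task_state(task_id: str) -> str:
    """Hypothetical helper: query the task backend for the current state."""
    raise NotImplementedError

def wait_for_tasks(task_ids, interval=1):
    # Poll every `interval` seconds and log progress in the same style as the
    # console output above, until every task reaches a terminal state.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):          # iterate over a copy so we can discard
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in TERMINAL_STATES:
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)

With the three task IDs shown above, such a loop would keep printing one state line per task plus the wait notice every few seconds, dropping each task from the set once it reports SUCCESS, which matches the pattern of the log entries in this section.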
2025-06-22 20:15:22.179387 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:15:22.179400 | orchestrator | 2025-06-22 20:15:22.179411 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:15:22.179449 | orchestrator | Sunday 22 June 2025 20:11:55 +0000 (0:00:00.243) 0:00:00.243 *********** 2025-06-22 20:15:22.179462 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:22.179475 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:15:22.179486 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:15:22.179497 | orchestrator | 2025-06-22 20:15:22.179508 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:15:22.179636 | orchestrator | Sunday 22 June 2025 20:11:55 +0000 (0:00:00.326) 0:00:00.569 *********** 2025-06-22 20:15:22.179649 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-22 20:15:22.179660 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-22 20:15:22.179671 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-22 20:15:22.179682 | orchestrator | 2025-06-22 20:15:22.179693 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-22 20:15:22.179704 | orchestrator | 2025-06-22 20:15:22.179715 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-22 20:15:22.179726 | orchestrator | Sunday 22 June 2025 20:11:56 +0000 (0:00:00.606) 0:00:01.176 *********** 2025-06-22 20:15:22.179736 | orchestrator | 2025-06-22 20:15:22.179747 | orchestrator | STILL ALIVE [task 'Waiting for Nova public port to be UP' is running] ********** 2025-06-22 20:15:22.179758 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:22.179769 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:15:22.179780 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:15:22.179824 | orchestrator | 2025-06-22 20:15:22.179838 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:15:22.179852 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:15:22.179867 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:15:22.179880 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:15:22.179892 | orchestrator | 2025-06-22 20:15:22.179905 | orchestrator | 2025-06-22 20:15:22.179917 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:15:22.179930 | orchestrator | Sunday 22 June 2025 20:14:46 +0000 (0:02:49.881) 0:02:51.057 *********** 2025-06-22 20:15:22.179942 | orchestrator | =============================================================================== 2025-06-22 20:15:22.179955 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 169.88s 2025-06-22 20:15:22.179967 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2025-06-22 20:15:22.179993 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-06-22 20:15:22.180006 | orchestrator | 2025-06-22 20:15:22.180018 | orchestrator | 2025-06-22 20:15:22.180030 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-06-22 20:15:22.180042 | orchestrator | 2025-06-22 20:15:22.180054 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:15:22.180066 | orchestrator | Sunday 22 June 2025 20:13:12 +0000 (0:00:00.277) 0:00:00.277 *********** 2025-06-22 20:15:22.180078 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:22.180108 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:15:22.180131 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:15:22.180144 | orchestrator | 2025-06-22 20:15:22.180174 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:15:22.180228 | orchestrator | Sunday 22 June 2025 20:13:12 +0000 (0:00:00.386) 0:00:00.663 *********** 2025-06-22 20:15:22.180241 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-22 20:15:22.180252 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-22 20:15:22.180263 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-22 20:15:22.180274 | orchestrator | 2025-06-22 20:15:22.180285 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-22 20:15:22.180308 | orchestrator | 2025-06-22 20:15:22.180319 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-22 20:15:22.180330 | orchestrator | Sunday 22 June 2025 20:13:13 +0000 (0:00:00.792) 0:00:01.456 *********** 2025-06-22 20:15:22.180341 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:15:22.180352 | orchestrator | 2025-06-22 20:15:22.180363 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-22 20:15:22.180374 | orchestrator | Sunday 22 June 2025 20:13:14 +0000 (0:00:01.059) 0:00:02.515 *********** 2025-06-22 20:15:22.180416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.180447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.180460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 
'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.180471 | orchestrator | 2025-06-22 20:15:22.180483 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-22 20:15:22.180494 | orchestrator | Sunday 22 June 2025 20:13:15 +0000 (0:00:00.880) 0:00:03.396 *********** 2025-06-22 20:15:22.180505 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-22 20:15:22.180517 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-22 20:15:22.180528 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:15:22.180539 | orchestrator | 2025-06-22 20:15:22.180550 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-22 20:15:22.180561 | orchestrator | Sunday 22 June 2025 20:13:16 +0000 (0:00:00.976) 0:00:04.372 *********** 2025-06-22 20:15:22.180572 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:15:22.180583 | orchestrator | 2025-06-22 20:15:22.180594 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-22 20:15:22.180605 | orchestrator | Sunday 22 June 2025 20:13:17 +0000 (0:00:00.652) 0:00:05.025 *********** 2025-06-22 20:15:22.180622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.180644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.180656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.180667 | orchestrator | 2025-06-22 20:15:22.180685 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-22 20:15:22.180696 | orchestrator | Sunday 22 June 2025 20:13:18 +0000 (0:00:01.419) 0:00:06.444 *********** 2025-06-22 20:15:22.180708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:22.180720 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:22.180731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:22.180742 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:22.180759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:22.180778 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:22.180811 | orchestrator | 2025-06-22 20:15:22.180824 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-22 20:15:22.180835 | orchestrator | Sunday 22 June 2025 20:13:18 +0000 (0:00:00.392) 0:00:06.837 *********** 2025-06-22 20:15:22.180846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:22.180858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:22.180870 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:22.180880 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:22.180901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:22.180913 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:22.180924 | orchestrator | 2025-06-22 20:15:22.180935 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-22 20:15:22.180946 | orchestrator | Sunday 22 June 2025 20:13:19 +0000 (0:00:00.841) 0:00:07.678 *********** 2025-06-22 20:15:22.180957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.180968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.180995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.181007 | orchestrator | 2025-06-22 20:15:22.181017 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-06-22 20:15:22.181029 | orchestrator | Sunday 22 June 2025 20:13:21 +0000 (0:00:01.290) 0:00:08.968 *********** 2025-06-22 20:15:22.181040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.181051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.181070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.181081 | orchestrator | 2025-06-22 20:15:22.181093 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-06-22 20:15:22.181104 | orchestrator | Sunday 22 June 2025 20:13:22 
+0000 (0:00:01.386) 0:00:10.355 *********** 2025-06-22 20:15:22.181114 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:22.181125 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:22.181136 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:22.181147 | orchestrator | 2025-06-22 20:15:22.181158 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-06-22 20:15:22.181176 | orchestrator | Sunday 22 June 2025 20:13:22 +0000 (0:00:00.477) 0:00:10.832 *********** 2025-06-22 20:15:22.181187 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:15:22.181198 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:15:22.181208 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:15:22.181219 | orchestrator | 2025-06-22 20:15:22.181230 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-06-22 20:15:22.181241 | orchestrator | Sunday 22 June 2025 20:13:24 +0000 (0:00:01.362) 0:00:12.195 *********** 2025-06-22 20:15:22.181252 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:15:22.181263 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:15:22.181274 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:15:22.181285 | orchestrator | 2025-06-22 20:15:22.181296 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-06-22 20:15:22.181311 | orchestrator | Sunday 22 June 2025 20:13:25 +0000 (0:00:01.182) 0:00:13.377 *********** 2025-06-22 20:15:22.181322 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:15:22.181333 | orchestrator | 2025-06-22 20:15:22.181344 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-06-22 20:15:22.181355 | orchestrator | Sunday 22 June 2025 20:13:26 +0000 (0:00:00.712) 0:00:14.089 *********** 2025-06-22 20:15:22.181366 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-06-22 20:15:22.181376 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-06-22 20:15:22.181387 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:22.181398 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:15:22.181409 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:15:22.181419 | orchestrator | 2025-06-22 20:15:22.181430 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-06-22 20:15:22.181441 | orchestrator | Sunday 22 June 2025 20:13:26 +0000 (0:00:00.662) 0:00:14.752 *********** 2025-06-22 20:15:22.181452 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:22.181463 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:22.181473 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:22.181484 | orchestrator | 2025-06-22 20:15:22.181495 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-06-22 20:15:22.181506 | orchestrator | Sunday 22 June 2025 20:13:27 +0000 (0:00:00.566) 0:00:15.318 *********** 
2025-06-22 20:15:22.181517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098232, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0762193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098232, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0762193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1098232, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0762193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098221, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0722194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098221, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0722194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1098221, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0722194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098214, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0702193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098214, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0702193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1098214, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0702193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181657 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098228, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0742192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098228, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0742192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1098228, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0742192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098202, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0652192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181713 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098202, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0652192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1098202, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0652192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181750 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098216, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0712192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098216, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0712192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1098216, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0712192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098226, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0732193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.181823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098226, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0732193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1098226, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0732193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-06-22 20:15:22.182381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098196, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.063219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098196, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.063219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1098196, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.063219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098181, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.057219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098181, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.057219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1098181, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.057219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098205, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.066219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098205, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.066219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1098205, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.066219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098185, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0592191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098185, 'dev': 148, 'nlink': 1, 'atime': 
1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0592191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1098185, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0592191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098223, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0732193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098223, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0732193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1098223, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0732193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098208, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0682192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098208, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0682192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1098208, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0682192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098229, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0752194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098229, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0752194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1098229, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0752194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182709 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098189, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.061219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098189, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.061219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1098189, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.061219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182755 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098218, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0722194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098218, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0722194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1098218, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0722194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098182, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.058219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098182, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.058219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1098182, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.058219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098186, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0592191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 
1098186, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0592191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1098186, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0592191, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098212, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.069219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098212, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.069219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.182989 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1098212, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.069219, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098278, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1082199, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098278, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1082199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1098278, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1082199, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098267, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0952196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098267, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0952196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183086 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1098267, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0952196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098236, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0772192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098236, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0772192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1098236, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0772192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098307, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098307, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1098307, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11522, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098239, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0782194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098239, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0782194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1098239, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0782194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098303, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}}) 2025-06-22 20:15:22.183261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098303, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1098303, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11222, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098310, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1202202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098310, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1202202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1098310, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1202202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183341 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098297, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.10922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098297, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.10922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1098297, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.10922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098300, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098300, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183417 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1098300, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098240, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0802195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098240, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0802195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1098240, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0802195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098271, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0962198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098271, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0962198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1098271, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0962198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098313, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.12122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098313, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.12122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1098313, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.12122, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 21898, 'inode': 1098305, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098305, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1098305, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11322, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098245, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0832195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098245, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0832195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1098245, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 
1750550516.0, 'ctime': 1750620402.0832195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098243, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0802195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098243, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0802195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1098243, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0802195, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098254, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0852196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098254, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0852196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1098254, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0852196, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098256, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0932198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098256, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0932198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1098256, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0932198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098274, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0962198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-22 20:15:22.183811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098274, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0962198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1098274, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0962198, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098298, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098298, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1098298, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.11022, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183888 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098276, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0972197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098276, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0972197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1098276, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.0972197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098315, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1232202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098315, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1232202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1098315, 'dev': 148, 'nlink': 1, 'atime': 1750550516.0, 'mtime': 1750550516.0, 'ctime': 1750620402.1232202, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:22.183969 | orchestrator | 2025-06-22 20:15:22.183980 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-22 20:15:22.183991 | orchestrator | Sunday 22 June 2025 20:14:05 +0000 (0:00:37.659) 0:00:52.977 *********** 2025-06-22 20:15:22.184007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.184019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.184036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:22.184048 | orchestrator | 2025-06-22 20:15:22.184059 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-22 20:15:22.184070 | orchestrator | Sunday 22 June 2025 20:14:06 +0000 (0:00:01.160) 0:00:54.138 *********** 2025-06-22 20:15:22.184081 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:15:22.184092 | orchestrator | 2025-06-22 20:15:22.184103 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-22 20:15:22.184114 | orchestrator | Sunday 22 June 2025 20:14:08 +0000 (0:00:02.232) 0:00:56.371 *********** 2025-06-22 20:15:22.184125 | orchestrator | changed: 
[testbed-node-0] 2025-06-22 20:15:22.184136 | orchestrator | 2025-06-22 20:15:22.184147 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-22 20:15:22.184158 | orchestrator | Sunday 22 June 2025 20:14:10 +0000 (0:00:02.439) 0:00:58.810 *********** 2025-06-22 20:15:22.184169 | orchestrator | 2025-06-22 20:15:22.184180 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-22 20:15:22.184196 | orchestrator | Sunday 22 June 2025 20:14:11 +0000 (0:00:00.230) 0:00:59.040 *********** 2025-06-22 20:15:22.184207 | orchestrator | 2025-06-22 20:15:22.184217 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-22 20:15:22.184228 | orchestrator | Sunday 22 June 2025 20:14:11 +0000 (0:00:00.064) 0:00:59.105 *********** 2025-06-22 20:15:22.184239 | orchestrator | 2025-06-22 20:15:22.184250 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-22 20:15:22.184261 | orchestrator | Sunday 22 June 2025 20:14:11 +0000 (0:00:00.065) 0:00:59.170 *********** 2025-06-22 20:15:22.184272 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:22.184282 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:22.184293 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:15:22.184304 | orchestrator | 2025-06-22 20:15:22.184315 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-06-22 20:15:22.184326 | orchestrator | Sunday 22 June 2025 20:14:13 +0000 (0:00:01.849) 0:01:01.020 *********** 2025-06-22 20:15:22.184337 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:22.184347 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:22.184358 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-22 20:15:22.184369 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-22 20:15:22.184380 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
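The handler sequence above is the usual rolling-restart pattern: Grafana is restarted on the first node only, and the play then blocks until its HTTP endpoint answers before the remaining nodes are touched. The "FAILED - RETRYING ... (12/11/10 retries left)" lines are what an Ansible until/retries probe prints while the container is still starting. A minimal sketch of that kind of probe (illustrative only, not the actual kolla-ansible handler; the address, URL path and delay are assumptions, port 3000 is taken from the grafana_server haproxy settings above):

- name: Wait for a restarted Grafana to answer HTTP (illustrative sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Poll the Grafana login page until it returns 200
      ansible.builtin.uri:
        # 192.168.16.10 is assumed to be the node's API address; port 3000 comes
        # from the haproxy entry above, the /login path is an assumption.
        url: "http://192.168.16.10:3000/login"
        status_code: 200
      register: grafana_probe
      until: grafana_probe.status == 200
      retries: 12   # matches the "12 retries left" countdown in the log
      delay: 10     # seconds between attempts (assumed)

Once this probe succeeds on the first node, the remaining Grafana containers are restarted in the next handler.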
2025-06-22 20:15:22.184391 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:22.184402 | orchestrator | 2025-06-22 20:15:22.184413 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-22 20:15:22.184424 | orchestrator | Sunday 22 June 2025 20:14:51 +0000 (0:00:38.266) 0:01:39.286 *********** 2025-06-22 20:15:22.184435 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:22.184445 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:15:22.184456 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:15:22.184467 | orchestrator | 2025-06-22 20:15:22.184484 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-22 20:15:22.184495 | orchestrator | Sunday 22 June 2025 20:15:16 +0000 (0:00:24.847) 0:02:04.134 *********** 2025-06-22 20:15:22.184506 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:22.184517 | orchestrator | 2025-06-22 20:15:22.184528 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-22 20:15:22.184539 | orchestrator | Sunday 22 June 2025 20:15:18 +0000 (0:00:02.435) 0:02:06.569 *********** 2025-06-22 20:15:22.184550 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:22.184560 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:22.184571 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:22.184582 | orchestrator | 2025-06-22 20:15:22.184593 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-22 20:15:22.184604 | orchestrator | Sunday 22 June 2025 20:15:18 +0000 (0:00:00.308) 0:02:06.878 *********** 2025-06-22 20:15:22.184622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-06-22 20:15:22.184635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-22 20:15:22.184648 | orchestrator | 2025-06-22 20:15:22.184659 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-22 20:15:22.184676 | orchestrator | Sunday 22 June 2025 20:15:21 +0000 (0:00:02.530) 0:02:09.408 *********** 2025-06-22 20:15:22.184693 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:22.184722 | orchestrator | 2025-06-22 20:15:22.184743 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:15:22.184759 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:15:22.184777 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:15:22.184861 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:15:22.184880 | orchestrator | 2025-06-22 20:15:22.184899 | orchestrator | 2025-06-22 20:15:22.184916 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-22 20:15:22.184933 | orchestrator | Sunday 22 June 2025 20:15:21 +0000 (0:00:00.251) 0:02:09.660 *********** 2025-06-22 20:15:22.184952 | orchestrator | =============================================================================== 2025-06-22 20:15:22.184971 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.27s 2025-06-22 20:15:22.184982 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.66s 2025-06-22 20:15:22.184993 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 24.85s 2025-06-22 20:15:22.185003 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.53s 2025-06-22 20:15:22.185014 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.44s 2025-06-22 20:15:22.185032 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.44s 2025-06-22 20:15:22.185043 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.23s 2025-06-22 20:15:22.185054 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.85s 2025-06-22 20:15:22.185065 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.42s 2025-06-22 20:15:22.185085 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.39s 2025-06-22 20:15:22.185096 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.36s 2025-06-22 20:15:22.185106 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s 2025-06-22 20:15:22.185116 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.18s 2025-06-22 20:15:22.185125 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.16s 2025-06-22 20:15:22.185135 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.06s 2025-06-22 20:15:22.185144 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.98s 2025-06-22 20:15:22.185154 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.88s 2025-06-22 20:15:22.185163 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.84s 2025-06-22 20:15:22.185173 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s 2025-06-22 20:15:22.185182 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.71s 2025-06-22 20:15:22.185192 | orchestrator | 2025-06-22 20:15:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:25.214995 | orchestrator | 2025-06-22 20:15:25 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:25.215110 | orchestrator | 2025-06-22 20:15:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:28.263338 | orchestrator | 2025-06-22 20:15:28 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:15:28.263441 | orchestrator | 2025-06-22 20:15:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:31.298716 | orchestrator | 2025-06-22 20:15:31 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 
20:15:31.298922 | orchestrator | 2025-06-22 20:15:31 | INFO  | Wait 1 second(s) until the next check
[... identical poll cycle repeats every ~3 seconds, 2025-06-22 20:15:34 through 20:17:51: Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED / Wait 1 second(s) until the next check ...]
2025-06-22 20:17:54.363996 | orchestrator | 2025-06-22 20:17:54 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED
2025-06-22 20:17:54.370368 | orchestrator | 2025-06-22 20:17:54 | INFO  | Task a30583c9-3bb0-4fd1-af63-e9296cc176c6 is in state STARTED
2025-06-22 20:17:54.370465 | orchestrator | 2025-06-22 20:17:54 | INFO  | Wait 1 second(s) until the next check
[... both tasks polled in state STARTED every ~3 seconds, 2025-06-22 20:17:57 through 20:18:09 ...]
2025-06-22 20:18:12.676190 | orchestrator | 2025-06-22 20:18:12 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED
2025-06-22 20:18:12.677153 | orchestrator | 2025-06-22 20:18:12 | INFO  | Task a30583c9-3bb0-4fd1-af63-e9296cc176c6 is in state SUCCESS
2025-06-22 20:18:12.677181 | orchestrator | 2025-06-22 20:18:12 | INFO  | Wait 1 second(s) until the next check
[... Task e162b4c0-e361-402b-8793-451903ba6dc7 polled in state STARTED every ~3 seconds, 2025-06-22 20:18:15 through 20:18:55 ...]
2025-06-22 20:18:58.251703 | orchestrator | 2025-06-22 20:18:58 | INFO  | Task
e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:18:58.251784 | orchestrator | 2025-06-22 20:18:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:19:01.295776 | orchestrator | 2025-06-22 20:19:01 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:19:01.295878 | orchestrator | 2025-06-22 20:19:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:19:04.343231 | orchestrator | 2025-06-22 20:19:04 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:19:04.343328 | orchestrator | 2025-06-22 20:19:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:19:07.384434 | orchestrator | 2025-06-22 20:19:07 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:19:07.384610 | orchestrator | 2025-06-22 20:19:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:19:10.426841 | orchestrator | 2025-06-22 20:19:10 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:19:10.426938 | orchestrator | 2025-06-22 20:19:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:19:13.478191 | orchestrator | 2025-06-22 20:19:13 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:19:13.478290 | orchestrator | 2025-06-22 20:19:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:19:16.534005 | orchestrator | 2025-06-22 20:19:16 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state STARTED 2025-06-22 20:19:16.534154 | orchestrator | 2025-06-22 20:19:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:19:19.576561 | orchestrator | 2025-06-22 20:19:19 | INFO  | Task e162b4c0-e361-402b-8793-451903ba6dc7 is in state SUCCESS 2025-06-22 20:19:19.577972 | orchestrator | 2025-06-22 20:19:19.578267 | orchestrator | None 2025-06-22 20:19:19.578304 | orchestrator | 2025-06-22 20:19:19.578323 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:19:19.578342 | orchestrator | 2025-06-22 20:19:19.578361 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-22 20:19:19.578378 | orchestrator | Sunday 22 June 2025 20:10:48 +0000 (0:00:00.211) 0:00:00.211 *********** 2025-06-22 20:19:19.578395 | orchestrator | changed: [testbed-manager] 2025-06-22 20:19:19.578409 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.578422 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:19.578433 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:19.578475 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.578511 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.578523 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.578534 | orchestrator | 2025-06-22 20:19:19.578546 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:19:19.578610 | orchestrator | Sunday 22 June 2025 20:10:48 +0000 (0:00:00.687) 0:00:00.899 *********** 2025-06-22 20:19:19.578622 | orchestrator | changed: [testbed-manager] 2025-06-22 20:19:19.578633 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.578643 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:19.578655 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:19.578665 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.578676 | orchestrator | changed: [testbed-node-4] 2025-06-22 
20:19:19.578687 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.578697 | orchestrator | 2025-06-22 20:19:19.578709 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:19:19.578722 | orchestrator | Sunday 22 June 2025 20:10:49 +0000 (0:00:00.583) 0:00:01.483 *********** 2025-06-22 20:19:19.578739 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-22 20:19:19.578755 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-22 20:19:19.578805 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-22 20:19:19.578845 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-22 20:19:19.578923 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-22 20:19:19.579037 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-22 20:19:19.579049 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-22 20:19:19.579059 | orchestrator | 2025-06-22 20:19:19.579068 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-22 20:19:19.579078 | orchestrator | 2025-06-22 20:19:19.579102 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-22 20:19:19.579112 | orchestrator | Sunday 22 June 2025 20:10:50 +0000 (0:00:01.423) 0:00:02.906 *********** 2025-06-22 20:19:19.579122 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:19.579132 | orchestrator | 2025-06-22 20:19:19.579141 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-22 20:19:19.579151 | orchestrator | Sunday 22 June 2025 20:10:51 +0000 (0:00:00.996) 0:00:03.903 *********** 2025-06-22 20:19:19.579162 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-22 20:19:19.579172 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-22 20:19:19.579181 | orchestrator | 2025-06-22 20:19:19.579191 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-22 20:19:19.579200 | orchestrator | Sunday 22 June 2025 20:10:56 +0000 (0:00:04.402) 0:00:08.306 *********** 2025-06-22 20:19:19.579210 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:19:19.579220 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:19:19.579229 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.579239 | orchestrator | 2025-06-22 20:19:19.579249 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-22 20:19:19.579301 | orchestrator | Sunday 22 June 2025 20:11:00 +0000 (0:00:04.351) 0:00:12.657 *********** 2025-06-22 20:19:19.579313 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.579323 | orchestrator | 2025-06-22 20:19:19.579333 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-22 20:19:19.579342 | orchestrator | Sunday 22 June 2025 20:11:01 +0000 (0:00:00.689) 0:00:13.346 *********** 2025-06-22 20:19:19.579420 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.579430 | orchestrator | 2025-06-22 20:19:19.579468 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-22 20:19:19.579479 | orchestrator | Sunday 22 June 2025 20:11:03 +0000 
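The three "Group hosts based on ..." plays at the top of this run are the standard kolla-ansible preamble: hosts are sorted into dynamic groups with group_by so that later plays can target, for example, only the hosts that have a given service enabled (the enable_nova_True items above). A minimal sketch of the pattern (illustrative only; variable and group names are assumptions, not the exact tasks of the playbook):

- name: Group hosts based on configuration (illustrative sketch)
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on OpenStack release
      ansible.builtin.group_by:
        key: "openstack_release_{{ openstack_release }}"

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        # yields groups such as enable_nova_True, as seen in the log items
        key: "enable_nova_{{ enable_nova | bool }}"

A later play can then simply use hosts: enable_nova_True instead of re-evaluating the condition on every task.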
(0:00:01.996) 0:00:15.343 *********** 2025-06-22 20:19:19.579489 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.579499 | orchestrator | 2025-06-22 20:19:19.579508 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:19:19.579518 | orchestrator | Sunday 22 June 2025 20:11:08 +0000 (0:00:04.802) 0:00:20.146 *********** 2025-06-22 20:19:19.579541 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.579551 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.579560 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.579570 | orchestrator | 2025-06-22 20:19:19.579580 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-22 20:19:19.579589 | orchestrator | Sunday 22 June 2025 20:11:08 +0000 (0:00:00.353) 0:00:20.500 *********** 2025-06-22 20:19:19.579645 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:19.579656 | orchestrator | 2025-06-22 20:19:19.579666 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-22 20:19:19.579675 | orchestrator | Sunday 22 June 2025 20:11:41 +0000 (0:00:33.090) 0:00:53.590 *********** 2025-06-22 20:19:19.579685 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.579694 | orchestrator | 2025-06-22 20:19:19.579759 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 20:19:19.579770 | orchestrator | Sunday 22 June 2025 20:11:56 +0000 (0:00:14.580) 0:01:08.171 *********** 2025-06-22 20:19:19.579780 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:19.579789 | orchestrator | 2025-06-22 20:19:19.579799 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 20:19:19.579808 | orchestrator | Sunday 22 June 2025 20:12:08 +0000 (0:00:11.820) 0:01:19.991 *********** 2025-06-22 20:19:19.579860 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:19.579872 | orchestrator | 2025-06-22 20:19:19.579882 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-22 20:19:19.579891 | orchestrator | Sunday 22 June 2025 20:12:08 +0000 (0:00:00.898) 0:01:20.889 *********** 2025-06-22 20:19:19.579901 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.579911 | orchestrator | 2025-06-22 20:19:19.579920 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:19:19.579951 | orchestrator | Sunday 22 June 2025 20:12:09 +0000 (0:00:00.416) 0:01:21.306 *********** 2025-06-22 20:19:19.579963 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:19.579973 | orchestrator | 2025-06-22 20:19:19.579983 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-22 20:19:19.579993 | orchestrator | Sunday 22 June 2025 20:12:09 +0000 (0:00:00.441) 0:01:21.747 *********** 2025-06-22 20:19:19.580002 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:19.580037 | orchestrator | 2025-06-22 20:19:19.580048 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-22 20:19:19.580058 | orchestrator | Sunday 22 June 2025 20:12:26 +0000 (0:00:16.843) 0:01:38.591 *********** 2025-06-22 20:19:19.580068 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.580077 | 
orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580087 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580096 | orchestrator | 2025-06-22 20:19:19.580106 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-22 20:19:19.580116 | orchestrator | 2025-06-22 20:19:19.580125 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-22 20:19:19.580135 | orchestrator | Sunday 22 June 2025 20:12:26 +0000 (0:00:00.301) 0:01:38.893 *********** 2025-06-22 20:19:19.580144 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:19.580154 | orchestrator | 2025-06-22 20:19:19.580163 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-06-22 20:19:19.580173 | orchestrator | Sunday 22 June 2025 20:12:27 +0000 (0:00:00.582) 0:01:39.476 *********** 2025-06-22 20:19:19.580183 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580192 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580225 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.580235 | orchestrator | 2025-06-22 20:19:19.580250 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-06-22 20:19:19.580268 | orchestrator | Sunday 22 June 2025 20:12:29 +0000 (0:00:02.090) 0:01:41.567 *********** 2025-06-22 20:19:19.580278 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580288 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580297 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.580307 | orchestrator | 2025-06-22 20:19:19.580316 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-22 20:19:19.580326 | orchestrator | Sunday 22 June 2025 20:12:31 +0000 (0:00:02.046) 0:01:43.613 *********** 2025-06-22 20:19:19.580336 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.580345 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580355 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580364 | orchestrator | 2025-06-22 20:19:19.580374 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-22 20:19:19.580384 | orchestrator | Sunday 22 June 2025 20:12:31 +0000 (0:00:00.324) 0:01:43.938 *********** 2025-06-22 20:19:19.580393 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:19:19.580403 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580413 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:19:19.580422 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580432 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-22 20:19:19.580493 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-06-22 20:19:19.580503 | orchestrator | 2025-06-22 20:19:19.580513 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-22 20:19:19.580523 | orchestrator | Sunday 22 June 2025 20:12:40 +0000 (0:00:08.572) 0:01:52.510 *********** 2025-06-22 20:19:19.580532 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.580542 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580551 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580561 | orchestrator | 2025-06-22 20:19:19.580571 | orchestrator | 
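The service-rabbitmq tasks above make sure the vhost and the credentials nova will use on the RabbitMQ cluster exist; they run once, delegated from testbed-node-0, and here the vhost checks are skipped and the user check reports "ok" because everything was already in place. Functionally this corresponds to something like the following sketch (module choice, user, vhost and variable names are assumptions, not the kolla-ansible implementation):

- name: Ensure RabbitMQ vhost and user for nova (illustrative sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Ensure the vhost exists
      community.rabbitmq.rabbitmq_vhost:
        name: /
        state: present

    - name: Ensure the messaging user exists with full permissions on the vhost
      community.rabbitmq.rabbitmq_user:
        user: openstack                      # assumed user name
        password: "{{ rabbitmq_password }}"  # assumed variable name
        vhost: /
        configure_priv: ".*"
        read_priv: ".*"
        write_priv: ".*"
        state: present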
TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-22 20:19:19.580581 | orchestrator | Sunday 22 June 2025 20:12:40 +0000 (0:00:00.355) 0:01:52.866 *********** 2025-06-22 20:19:19.580590 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 20:19:19.580600 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.580610 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:19:19.580619 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580629 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:19:19.580639 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580648 | orchestrator | 2025-06-22 20:19:19.580658 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-22 20:19:19.580667 | orchestrator | Sunday 22 June 2025 20:12:41 +0000 (0:00:00.768) 0:01:53.634 *********** 2025-06-22 20:19:19.580677 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.580687 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580696 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580706 | orchestrator | 2025-06-22 20:19:19.580716 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-06-22 20:19:19.580725 | orchestrator | Sunday 22 June 2025 20:12:42 +0000 (0:00:00.534) 0:01:54.169 *********** 2025-06-22 20:19:19.580736 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580745 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580755 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.580764 | orchestrator | 2025-06-22 20:19:19.580773 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-22 20:19:19.580783 | orchestrator | Sunday 22 June 2025 20:12:43 +0000 (0:00:00.883) 0:01:55.052 *********** 2025-06-22 20:19:19.580793 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580810 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580821 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.580830 | orchestrator | 2025-06-22 20:19:19.580840 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-06-22 20:19:19.580856 | orchestrator | Sunday 22 June 2025 20:12:45 +0000 (0:00:02.010) 0:01:57.063 *********** 2025-06-22 20:19:19.580866 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580876 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580885 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:19.580895 | orchestrator | 2025-06-22 20:19:19.580905 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 20:19:19.580915 | orchestrator | Sunday 22 June 2025 20:13:02 +0000 (0:00:17.797) 0:02:14.861 *********** 2025-06-22 20:19:19.580924 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580934 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.580944 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:19.580953 | orchestrator | 2025-06-22 20:19:19.580963 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 20:19:19.580973 | orchestrator | Sunday 22 June 2025 20:13:13 +0000 (0:00:10.284) 0:02:25.146 *********** 2025-06-22 20:19:19.580982 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.580992 | orchestrator | 
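"Create cell0 mappings", "Get a list of existing cells" and "Extract current cell settings from list" above, together with the "Create cell" step that follows, are thin wrappers around nova-manage cell_v2, executed once from testbed-node-0 inside a bootstrap container. Run by hand, the equivalent commands look roughly like this sketch (connection URLs and the cell name are placeholders; the real values come from the rendered nova.conf):

- name: Bootstrap the Nova cell layout (illustrative sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Map cell0 to the nova_cell0 database created earlier
      ansible.builtin.command: >
        nova-manage cell_v2 map_cell0
        --database_connection mysql+pymysql://nova:PASSWORD@DBHOST/nova_cell0

    - name: List the cells that already exist
      ansible.builtin.command: nova-manage cell_v2 list_cells --verbose
      register: existing_cells
      changed_when: false

    - name: Create the default cell only if it is not listed yet
      ansible.builtin.command: >
        nova-manage cell_v2 create_cell --name cell1
        --database_connection mysql+pymysql://nova:PASSWORD@DBHOST/nova
        --transport-url rabbit://openstack:PASSWORD@RABBITHOST:5672/
      when: "'cell1' not in existing_cells.stdout"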
ok: [testbed-node-0] 2025-06-22 20:19:19.581002 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.581011 | orchestrator | 2025-06-22 20:19:19.581021 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-06-22 20:19:19.581031 | orchestrator | Sunday 22 June 2025 20:13:14 +0000 (0:00:01.693) 0:02:26.840 *********** 2025-06-22 20:19:19.581040 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.581050 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.581060 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.581069 | orchestrator | 2025-06-22 20:19:19.581079 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-22 20:19:19.581089 | orchestrator | Sunday 22 June 2025 20:13:26 +0000 (0:00:11.726) 0:02:38.566 *********** 2025-06-22 20:19:19.581099 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.581108 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.581118 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.581127 | orchestrator | 2025-06-22 20:19:19.581137 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-22 20:19:19.581147 | orchestrator | Sunday 22 June 2025 20:13:28 +0000 (0:00:01.475) 0:02:40.042 *********** 2025-06-22 20:19:19.581156 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.581178 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.581188 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.581198 | orchestrator | 2025-06-22 20:19:19.581208 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-06-22 20:19:19.581217 | orchestrator | 2025-06-22 20:19:19.581227 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:19:19.581236 | orchestrator | Sunday 22 June 2025 20:13:28 +0000 (0:00:00.324) 0:02:40.367 *********** 2025-06-22 20:19:19.581246 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:19.581257 | orchestrator | 2025-06-22 20:19:19.581267 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-22 20:19:19.581277 | orchestrator | Sunday 22 June 2025 20:13:28 +0000 (0:00:00.542) 0:02:40.909 *********** 2025-06-22 20:19:19.581287 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-22 20:19:19.581296 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-22 20:19:19.581306 | orchestrator | 2025-06-22 20:19:19.581316 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-06-22 20:19:19.581326 | orchestrator | Sunday 22 June 2025 20:13:32 +0000 (0:00:03.416) 0:02:44.326 *********** 2025-06-22 20:19:19.581336 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-22 20:19:19.581347 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-22 20:19:19.581363 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-22 20:19:19.581373 | orchestrator | changed: [testbed-node-0] => (item=nova -> 
https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-22 20:19:19.581383 | orchestrator | 2025-06-22 20:19:19.581393 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-22 20:19:19.581402 | orchestrator | Sunday 22 June 2025 20:13:39 +0000 (0:00:06.942) 0:02:51.269 *********** 2025-06-22 20:19:19.581412 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:19:19.581429 | orchestrator | 2025-06-22 20:19:19.581477 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-22 20:19:19.581498 | orchestrator | Sunday 22 June 2025 20:13:42 +0000 (0:00:03.403) 0:02:54.672 *********** 2025-06-22 20:19:19.581514 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:19:19.581529 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-22 20:19:19.581545 | orchestrator | 2025-06-22 20:19:19.581559 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-22 20:19:19.581573 | orchestrator | Sunday 22 June 2025 20:13:46 +0000 (0:00:03.923) 0:02:58.596 *********** 2025-06-22 20:19:19.581589 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:19:19.581605 | orchestrator | 2025-06-22 20:19:19.581620 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-06-22 20:19:19.581637 | orchestrator | Sunday 22 June 2025 20:13:50 +0000 (0:00:03.412) 0:03:02.009 *********** 2025-06-22 20:19:19.581654 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-06-22 20:19:19.581669 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-06-22 20:19:19.581686 | orchestrator | 2025-06-22 20:19:19.581696 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-22 20:19:19.581714 | orchestrator | Sunday 22 June 2025 20:13:58 +0000 (0:00:08.101) 0:03:10.110 *********** 2025-06-22 20:19:19.581729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.581751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
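The service-ks-register block above is the Keystone side of the deployment: it creates the compute service, its internal and public endpoints, the service project, the nova service user and the role grants. With the openstack.cloud collection the same registration can be sketched roughly as follows (illustrative only; authentication setup, region and password variable are assumptions, the endpoint URLs are the ones shown in the log):

- name: Register nova in Keystone (illustrative sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the compute service
      openstack.cloud.catalog_service:
        name: nova
        service_type: compute
        state: present

    - name: Create the internal and public endpoints
      openstack.cloud.endpoint:
        service: nova
        endpoint_interface: "{{ item.interface }}"
        url: "{{ item.url }}"
        region: RegionOne          # region name assumed
        state: present
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:8774/v2.1" }
        - { interface: public, url: "https://api.testbed.osism.xyz:8774/v2.1" }

    - name: Create the nova service user in the service project
      openstack.cloud.identity_user:
        name: nova
        password: "{{ nova_keystone_password }}"   # assumed variable name
        default_project: service
        state: present

    - name: Grant the admin and service roles to the nova user
      openstack.cloud.role_assignment:
        user: nova
        role: "{{ item }}"
        project: service
      loop:
        - admin
        - service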
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.581773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.581792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.581804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.581814 | 
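Each item in the "Ensuring config directories exist" task above is a full container definition: the healthcheck block maps directly onto Docker's native healthcheck (kolla's healthcheck_curl probing the API port on the node's address), and the haproxy block carries the frontend settings (ports, external FQDN) used to configure the load balancer. Outside of kolla-ansible, the healthcheck part would correspond roughly to this sketch (image, command and timings are taken from the log; everything else, including the omitted config volumes, is an assumption):

- name: Run nova-api with an equivalent Docker healthcheck (illustrative sketch)
  hosts: testbed-node-0
  gather_facts: false
  tasks:
    - name: Start the container with the healthcheck from the service definition
      community.docker.docker_container:
        name: nova_api
        image: registry.osism.tech/kolla/nova-api:2024.2
        healthcheck:
          test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774"]
          interval: 30s
          timeout: 30s
          retries: 3
          start_period: 5s
        state: started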
orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.581825 | orchestrator | 2025-06-22 20:19:19.581835 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-22 20:19:19.581845 | orchestrator | Sunday 22 June 2025 20:13:59 +0000 (0:00:01.318) 0:03:11.428 *********** 2025-06-22 20:19:19.581854 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.581864 | orchestrator | 2025-06-22 20:19:19.581878 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-22 20:19:19.581888 | orchestrator | Sunday 22 June 2025 20:13:59 +0000 (0:00:00.117) 0:03:11.546 *********** 2025-06-22 20:19:19.581903 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.581913 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.581923 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.581932 | orchestrator | 2025-06-22 20:19:19.581942 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-22 20:19:19.581952 | orchestrator | Sunday 22 June 2025 20:14:00 +0000 (0:00:00.516) 0:03:12.062 *********** 2025-06-22 20:19:19.581962 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:19:19.581972 | orchestrator | 2025-06-22 20:19:19.581982 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-22 20:19:19.581992 | orchestrator | Sunday 22 June 2025 20:14:00 +0000 (0:00:00.690) 0:03:12.752 *********** 2025-06-22 20:19:19.582001 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.582011 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.582067 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.582078 | orchestrator | 2025-06-22 20:19:19.582088 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:19:19.582097 | orchestrator | Sunday 22 June 2025 20:14:01 +0000 (0:00:00.309) 0:03:13.061 *********** 2025-06-22 20:19:19.582107 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:19.582117 | orchestrator | 2025-06-22 20:19:19.582127 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-22 20:19:19.582137 | orchestrator | Sunday 22 June 2025 20:14:01 +0000 (0:00:00.718) 0:03:13.780 *********** 2025-06-22 20:19:19.582148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.582168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.582191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.582202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.582213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.582231 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.582242 | orchestrator | 2025-06-22 20:19:19.582252 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-22 20:19:19.582262 | orchestrator | Sunday 22 June 2025 20:14:04 +0000 (0:00:02.379) 0:03:16.159 *********** 2025-06-22 20:19:19.582273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.582299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.582310 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.582321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.582332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.582342 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.582360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.582381 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.582391 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.582401 | orchestrator | 2025-06-22 20:19:19.582411 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-22 20:19:19.582421 | orchestrator | Sunday 22 June 2025 20:14:04 +0000 (0:00:00.646) 0:03:16.806 *********** 2025-06-22 20:19:19.582431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.582472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.583205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.583315 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.583343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.583363 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.583400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.583421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.583478 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.583491 | orchestrator | 2025-06-22 20:19:19.583503 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-22 20:19:19.583515 | orchestrator | Sunday 22 June 2025 20:14:05 +0000 (0:00:01.148) 0:03:17.955 *********** 2025-06-22 20:19:19.583546 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.583576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.583590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.583603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.583632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.583654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.583666 | orchestrator | 2025-06-22 20:19:19.583678 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-22 20:19:19.583689 | orchestrator | Sunday 22 June 2025 20:14:08 +0000 (0:00:02.493) 0:03:20.448 *********** 2025-06-22 20:19:19.583706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.583719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.583741 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.583762 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.583775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.583794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.583813 | orchestrator | 2025-06-22 20:19:19.583832 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-22 20:19:19.583852 | orchestrator | Sunday 22 June 2025 20:14:14 +0000 (0:00:05.631) 0:03:26.080 *********** 2025-06-22 20:19:19.583874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.583897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.583918 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.584040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.584070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.584088 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.584108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:19.584129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.584160 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.584179 | orchestrator | 2025-06-22 20:19:19.584194 | orchestrator | TASK [nova : Copying over 
nova-api-wsgi.conf] ********************************** 2025-06-22 20:19:19.584213 | orchestrator | Sunday 22 June 2025 20:14:14 +0000 (0:00:00.634) 0:03:26.715 *********** 2025-06-22 20:19:19.584231 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:19.584248 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.584266 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:19.584284 | orchestrator | 2025-06-22 20:19:19.584315 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-22 20:19:19.584335 | orchestrator | Sunday 22 June 2025 20:14:16 +0000 (0:00:02.053) 0:03:28.768 *********** 2025-06-22 20:19:19.584354 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.584372 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.584390 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.584411 | orchestrator | 2025-06-22 20:19:19.584429 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-22 20:19:19.584475 | orchestrator | Sunday 22 June 2025 20:14:17 +0000 (0:00:00.361) 0:03:29.130 *********** 2025-06-22 20:19:19.584502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.584516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.584539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:19.584561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.584573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.584590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.584601 | orchestrator | 2025-06-22 20:19:19.584612 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:19:19.584624 | orchestrator | Sunday 22 June 2025 20:14:18 +0000 (0:00:01.839) 0:03:30.970 *********** 2025-06-22 20:19:19.584635 
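Every container definition in the preceding tasks carries a 'healthcheck' block; healthcheck_curl, healthcheck_port and healthcheck_listen are helper scripts provided inside the kolla images. As a rough sketch only (kolla-ansible applies these through its own container module, not by building a CLI line), such a block corresponds to Docker's standard health-check options roughly as follows; the values are copied from the nova_api definition in the log and are interpreted as seconds.

    import shlex

    healthcheck = {  # taken from the nova_api item printed above
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8774 "],
        "timeout": "30",
    }

    def docker_health_flags(hc):
        # A leading "CMD-SHELL" marker means the remainder is a shell command string.
        cmd = hc["test"][1] if hc["test"][0] == "CMD-SHELL" else " ".join(hc["test"])
        return [
            "--health-cmd", cmd.strip(),
            "--health-interval", f"{hc['interval']}s",
            "--health-timeout", f"{hc['timeout']}s",
            "--health-retries", hc["retries"],
            "--health-start-period", f"{hc['start_period']}s",
        ]

    print(" ".join(shlex.quote(flag) for flag in docker_health_flags(healthcheck)))
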
| orchestrator | 2025-06-22 20:19:19.584646 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:19:19.584657 | orchestrator | Sunday 22 June 2025 20:14:19 +0000 (0:00:00.137) 0:03:31.107 *********** 2025-06-22 20:19:19.584668 | orchestrator | 2025-06-22 20:19:19.584679 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:19:19.584689 | orchestrator | Sunday 22 June 2025 20:14:19 +0000 (0:00:00.133) 0:03:31.241 *********** 2025-06-22 20:19:19.584700 | orchestrator | 2025-06-22 20:19:19.584711 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-22 20:19:19.584722 | orchestrator | Sunday 22 June 2025 20:14:19 +0000 (0:00:00.273) 0:03:31.514 *********** 2025-06-22 20:19:19.584733 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.584743 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:19.584761 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:19.584772 | orchestrator | 2025-06-22 20:19:19.584782 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-22 20:19:19.584793 | orchestrator | Sunday 22 June 2025 20:14:38 +0000 (0:00:18.943) 0:03:50.458 *********** 2025-06-22 20:19:19.584808 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.584826 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:19.584846 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:19.584864 | orchestrator | 2025-06-22 20:19:19.584879 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-22 20:19:19.584894 | orchestrator | 2025-06-22 20:19:19.584913 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:19.584932 | orchestrator | Sunday 22 June 2025 20:14:49 +0000 (0:00:10.623) 0:04:01.082 *********** 2025-06-22 20:19:19.584951 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:19.584963 | orchestrator | 2025-06-22 20:19:19.584982 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:19.585000 | orchestrator | Sunday 22 June 2025 20:14:50 +0000 (0:00:01.138) 0:04:02.220 *********** 2025-06-22 20:19:19.585019 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.585038 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.585056 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.585073 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.585084 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.585095 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.585106 | orchestrator | 2025-06-22 20:19:19.585117 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-22 20:19:19.585133 | orchestrator | Sunday 22 June 2025 20:14:50 +0000 (0:00:00.714) 0:04:02.934 *********** 2025-06-22 20:19:19.585152 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.585171 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.585191 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.585209 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:19:19.585228 | 
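The module-load role included here, and the bridge-nf-call task that follows it, only act on the compute nodes (testbed-node-3/4/5). A hand-written sketch of the end state they produce on such a node is shown below; the playbook itself does this with Ansible modules and templates, and the sysctl value of 1 is an assumption, since the log only records that the settings changed.

    import subprocess
    from pathlib import Path

    MODULE = "br_netfilter"
    SYSCTLS = ["net.bridge.bridge-nf-call-iptables", "net.bridge.bridge-nf-call-ip6tables"]

    # 1. Load the module now (TASK [module-load : Load modules]); requires root.
    subprocess.run(["modprobe", MODULE], check=True)

    # 2. Persist it across reboots via modules-load.d
    #    (TASK [module-load : Persist modules via modules-load.d]); the filename is illustrative.
    Path(f"/etc/modules-load.d/{MODULE}.conf").write_text(MODULE + "\n")

    # 3. Enable the bridge netfilter sysctls
    #    (TASK [nova-cell : Enable bridge-nf-call sysctl variables]); value 1 assumed.
    for key in SYSCTLS:
        subprocess.run(["sysctl", "-w", f"{key}=1"], check=True)
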
orchestrator | 2025-06-22 20:19:19.585248 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 20:19:19.585277 | orchestrator | Sunday 22 June 2025 20:14:52 +0000 (0:00:01.066) 0:04:04.001 *********** 2025-06-22 20:19:19.585296 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-22 20:19:19.585308 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-22 20:19:19.585318 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-22 20:19:19.585329 | orchestrator | 2025-06-22 20:19:19.585340 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 20:19:19.585350 | orchestrator | Sunday 22 June 2025 20:14:52 +0000 (0:00:00.822) 0:04:04.823 *********** 2025-06-22 20:19:19.585361 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-22 20:19:19.585372 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-22 20:19:19.585383 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-22 20:19:19.585412 | orchestrator | 2025-06-22 20:19:19.585423 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 20:19:19.585498 | orchestrator | Sunday 22 June 2025 20:14:53 +0000 (0:00:01.115) 0:04:05.939 *********** 2025-06-22 20:19:19.585513 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-22 20:19:19.585524 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.585536 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-22 20:19:19.585546 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.585557 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-22 20:19:19.585568 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.585671 | orchestrator | 2025-06-22 20:19:19.585685 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-22 20:19:19.585696 | orchestrator | Sunday 22 June 2025 20:14:55 +0000 (0:00:01.110) 0:04:07.050 *********** 2025-06-22 20:19:19.585706 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:19:19.585717 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:19:19.585728 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 20:19:19.585739 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:19:19.585750 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.585760 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 20:19:19.585778 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:19:19.585790 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:19:19.585800 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.585811 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 20:19:19.585822 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:19:19.585833 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.585844 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 
20:19:19.585855 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 20:19:19.585865 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 20:19:19.585876 | orchestrator | 2025-06-22 20:19:19.585887 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-22 20:19:19.585898 | orchestrator | Sunday 22 June 2025 20:14:56 +0000 (0:00:01.132) 0:04:08.182 *********** 2025-06-22 20:19:19.585909 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.585919 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.585930 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.585941 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.585951 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.585962 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.585973 | orchestrator | 2025-06-22 20:19:19.585983 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-22 20:19:19.585994 | orchestrator | Sunday 22 June 2025 20:14:57 +0000 (0:00:01.240) 0:04:09.423 *********** 2025-06-22 20:19:19.586005 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.586079 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.586108 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.586130 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.586141 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.586151 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.586162 | orchestrator | 2025-06-22 20:19:19.586174 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-22 20:19:19.586195 | orchestrator | Sunday 22 June 2025 20:14:58 +0000 (0:00:01.562) 0:04:10.986 *********** 2025-06-22 20:19:19.586216 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': 
{'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586328 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586367 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 
20:19:19.586415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586486 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586561 | orchestrator | 2025-06-22 20:19:19.586572 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:19.586586 | orchestrator | Sunday 22 June 2025 20:15:01 +0000 (0:00:02.493) 0:04:13.480 *********** 2025-06-22 20:19:19.586605 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:19.586618 | orchestrator | 2025-06-22 20:19:19.586629 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-22 20:19:19.586650 | orchestrator | Sunday 22 June 2025 20:15:02 +0000 (0:00:01.163) 0:04:14.643 *********** 2025-06-22 20:19:19.586661 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 
67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586750 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586782 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.586881 | orchestrator | 2025-06-22 20:19:19.586893 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-22 20:19:19.586904 | orchestrator | Sunday 22 June 2025 20:15:06 +0000 (0:00:03.678) 0:04:18.322 *********** 2025-06-22 20:19:19.586923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.586935 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.586951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.586963 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.586975 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.586992 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.587011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587022 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.587034 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.587050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.587062 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587079 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.587091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.587102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587114 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.587133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.587145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587156 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.587167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.587183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587195 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.587206 | orchestrator | 2025-06-22 20:19:19.587217 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-22 20:19:19.587234 | orchestrator | Sunday 22 June 2025 20:15:08 +0000 (0:00:01.694) 0:04:20.016 *********** 2025-06-22 20:19:19.587246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.587259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.587290 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587312 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.587326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.587343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.587362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.587373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587385 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.587402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.587415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587426 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.587577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.587619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587650 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.587662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.587674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587685 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.587696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 
'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.587721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.587732 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.587744 | orchestrator | 2025-06-22 20:19:19.587755 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:19.587766 | orchestrator | Sunday 22 June 2025 20:15:09 +0000 (0:00:01.945) 0:04:21.961 *********** 2025-06-22 20:19:19.587778 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.587788 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.587799 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.587810 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:19:19.587821 | orchestrator | 2025-06-22 20:19:19.587832 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-22 20:19:19.587843 | orchestrator | Sunday 22 June 2025 20:15:10 +0000 (0:00:00.779) 0:04:22.740 *********** 2025-06-22 20:19:19.587854 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:19:19.587865 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 20:19:19.587882 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 20:19:19.587892 | orchestrator | 2025-06-22 20:19:19.587901 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-22 20:19:19.587911 | orchestrator | Sunday 22 June 2025 20:15:11 +0000 (0:00:00.912) 0:04:23.653 *********** 2025-06-22 20:19:19.587921 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:19:19.587930 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 20:19:19.587940 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 20:19:19.587949 | orchestrator | 2025-06-22 20:19:19.587959 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-22 20:19:19.587973 | orchestrator | Sunday 22 June 2025 20:15:12 +0000 (0:00:00.809) 0:04:24.463 *********** 2025-06-22 20:19:19.587983 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:19:19.587993 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:19:19.588002 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:19:19.588012 | orchestrator | 2025-06-22 20:19:19.588021 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-22 20:19:19.588031 | orchestrator | Sunday 22 June 2025 20:15:12 
+0000 (0:00:00.445) 0:04:24.908 *********** 2025-06-22 20:19:19.588040 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:19:19.588050 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:19:19.588059 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:19:19.588069 | orchestrator | 2025-06-22 20:19:19.588078 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-22 20:19:19.588088 | orchestrator | Sunday 22 June 2025 20:15:13 +0000 (0:00:00.479) 0:04:25.387 *********** 2025-06-22 20:19:19.588098 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:19:19.588107 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:19:19.588117 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:19:19.588127 | orchestrator | 2025-06-22 20:19:19.588136 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-22 20:19:19.588146 | orchestrator | Sunday 22 June 2025 20:15:14 +0000 (0:00:01.082) 0:04:26.470 *********** 2025-06-22 20:19:19.588155 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:19:19.588165 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:19:19.588175 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:19:19.588184 | orchestrator | 2025-06-22 20:19:19.588194 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-22 20:19:19.588203 | orchestrator | Sunday 22 June 2025 20:15:15 +0000 (0:00:01.109) 0:04:27.580 *********** 2025-06-22 20:19:19.588213 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:19:19.588222 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:19:19.588232 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:19:19.588241 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-22 20:19:19.588251 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-22 20:19:19.588261 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-22 20:19:19.588270 | orchestrator | 2025-06-22 20:19:19.588280 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-22 20:19:19.588290 | orchestrator | Sunday 22 June 2025 20:15:19 +0000 (0:00:03.803) 0:04:31.383 *********** 2025-06-22 20:19:19.588300 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.588309 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.588319 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.588328 | orchestrator | 2025-06-22 20:19:19.588338 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-22 20:19:19.588347 | orchestrator | Sunday 22 June 2025 20:15:19 +0000 (0:00:00.288) 0:04:31.672 *********** 2025-06-22 20:19:19.588357 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.588366 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.588376 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.588391 | orchestrator | 2025-06-22 20:19:19.588401 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-22 20:19:19.588410 | orchestrator | Sunday 22 June 2025 20:15:19 +0000 (0:00:00.276) 0:04:31.949 *********** 2025-06-22 20:19:19.588420 | orchestrator | changed: 
[testbed-node-3] 2025-06-22 20:19:19.588430 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.588468 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.588483 | orchestrator | 2025-06-22 20:19:19.588506 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-22 20:19:19.588522 | orchestrator | Sunday 22 June 2025 20:15:21 +0000 (0:00:01.418) 0:04:33.367 *********** 2025-06-22 20:19:19.588532 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:19:19.588543 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:19:19.588553 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:19:19.588562 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:19:19.588572 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:19:19.588582 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:19:19.588592 | orchestrator | 2025-06-22 20:19:19.588601 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-22 20:19:19.588611 | orchestrator | Sunday 22 June 2025 20:15:24 +0000 (0:00:03.022) 0:04:36.390 *********** 2025-06-22 20:19:19.588620 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:19:19.588630 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:19:19.588639 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:19:19.588649 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:19:19.588658 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.588668 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:19:19.588677 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.588686 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:19:19.588696 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.588705 | orchestrator | 2025-06-22 20:19:19.588720 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-22 20:19:19.588730 | orchestrator | Sunday 22 June 2025 20:15:27 +0000 (0:00:03.072) 0:04:39.463 *********** 2025-06-22 20:19:19.588740 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.588749 | orchestrator | 2025-06-22 20:19:19.588758 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-22 20:19:19.588768 | orchestrator | Sunday 22 June 2025 20:15:27 +0000 (0:00:00.133) 0:04:39.596 *********** 2025-06-22 20:19:19.588777 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.588787 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.588796 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.588806 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.588815 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.588825 | orchestrator 
| skipping: [testbed-node-2] 2025-06-22 20:19:19.588834 | orchestrator | 2025-06-22 20:19:19.588844 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-22 20:19:19.588853 | orchestrator | Sunday 22 June 2025 20:15:28 +0000 (0:00:00.748) 0:04:40.344 *********** 2025-06-22 20:19:19.588863 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:19:19.588872 | orchestrator | 2025-06-22 20:19:19.588882 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-22 20:19:19.588899 | orchestrator | Sunday 22 June 2025 20:15:29 +0000 (0:00:00.677) 0:04:41.022 *********** 2025-06-22 20:19:19.588908 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.588918 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.588927 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.588937 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.588946 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.588955 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.588965 | orchestrator | 2025-06-22 20:19:19.588974 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-22 20:19:19.588984 | orchestrator | Sunday 22 June 2025 20:15:29 +0000 (0:00:00.590) 0:04:41.612 *********** 2025-06-22 20:19:19.588994 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589075 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589154 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589170 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589180 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589191 | orchestrator | 2025-06-22 20:19:19.589200 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-22 20:19:19.589210 | orchestrator | Sunday 22 June 2025 20:15:33 +0000 (0:00:03.896) 0:04:45.508 *********** 2025-06-22 20:19:19.589225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.589241 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.589251 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.589261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.589277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.589288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.589302 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589318 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.589535 | orchestrator | 2025-06-22 20:19:19.589545 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-22 20:19:19.589554 | orchestrator | Sunday 22 June 2025 20:15:38 +0000 (0:00:05.350) 0:04:50.859 *********** 2025-06-22 20:19:19.589564 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.589574 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.589584 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.589593 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.589603 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.589612 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.589622 | orchestrator | 2025-06-22 20:19:19.589632 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-22 20:19:19.589641 | orchestrator | Sunday 22 June 2025 20:15:40 +0000 (0:00:01.665) 0:04:52.525 *********** 2025-06-22 20:19:19.589653 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:19:19.589669 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:19:19.589685 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:19:19.589701 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:19:19.589725 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  
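[editor's note] The "Copying over ..." tasks in this part of the play are Ansible template copies: kolla-ansible renders Jinja2 sources such as qemu.conf.j2 and libvirtd.conf.j2 into the per-service directories under /etc/kolla/ on each node, and only nodes in the matching group (here the compute nodes 3-5) report "changed". The actual templates are not reproduced in this log; the following is only a minimal stand-alone sketch of that render step, with made-up template text and variable names (listen_address, log_level) used purely for illustration.

```python
# Minimal sketch of the template-render step behind "Copying over libvirt configuration".
# The template text and variable names here are illustrative, not kolla-ansible's own.
from jinja2 import Template

LIBVIRTD_CONF_J2 = """\
listen_tcp = 1
listen_addr = "{{ listen_address }}"
log_level = {{ log_level }}
"""

def render(template_text: str, **variables) -> str:
    """Render a Jinja2 template, conceptually what Ansible's template module does."""
    return Template(template_text).render(**variables)

if __name__ == "__main__":
    print(render(LIBVIRTD_CONF_J2, listen_address="192.168.16.13", log_level=3))
```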
2025-06-22 20:19:19.589741 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:19:19.589751 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.589760 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:19:19.589770 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 20:19:19.589780 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.589789 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 20:19:19.589806 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.589816 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:19:19.589825 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:19:19.589835 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:19:19.589844 | orchestrator | 2025-06-22 20:19:19.589854 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-22 20:19:19.589864 | orchestrator | Sunday 22 June 2025 20:15:44 +0000 (0:00:03.735) 0:04:56.260 *********** 2025-06-22 20:19:19.589873 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.589883 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.589892 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.589901 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.589911 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.589921 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.589930 | orchestrator | 2025-06-22 20:19:19.589940 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-22 20:19:19.589949 | orchestrator | Sunday 22 June 2025 20:15:45 +0000 (0:00:00.798) 0:04:57.059 *********** 2025-06-22 20:19:19.589959 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 20:19:19.589973 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 20:19:19.589983 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:19:19.589993 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:19:19.590002 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 20:19:19.590012 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:19:19.590056 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:19.590065 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:19.590075 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:19.590084 | orchestrator | 
skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:19.590094 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.590103 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:19.590113 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.590123 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:19.590132 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.590142 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:19.590152 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:19.590161 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:19.590171 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:19.590180 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:19.590196 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:19.590205 | orchestrator | 2025-06-22 20:19:19.590216 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-22 20:19:19.590227 | orchestrator | Sunday 22 June 2025 20:15:50 +0000 (0:00:05.294) 0:05:02.353 *********** 2025-06-22 20:19:19.590237 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:19:19.590248 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:19:19.590265 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:19:19.590277 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:19:19.590288 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:19:19.590299 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:19:19.590310 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:19:19.590320 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:19:19.590331 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:19:19.590342 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 20:19:19.590353 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 20:19:19.590363 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:19:19.590374 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.590385 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 
20:19:19.590396 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:19:19.590407 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.590418 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:19:19.590428 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.590492 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:19:19.590504 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:19:19.590515 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:19:19.590532 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:19:19.590543 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:19:19.590554 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:19:19.590564 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:19:19.590575 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:19:19.590586 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:19:19.590597 | orchestrator | 2025-06-22 20:19:19.590608 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-22 20:19:19.590619 | orchestrator | Sunday 22 June 2025 20:15:57 +0000 (0:00:06.689) 0:05:09.042 *********** 2025-06-22 20:19:19.590630 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.590640 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.590651 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.590662 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.590673 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.590691 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.590702 | orchestrator | 2025-06-22 20:19:19.590712 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-22 20:19:19.590723 | orchestrator | Sunday 22 June 2025 20:15:57 +0000 (0:00:00.578) 0:05:09.621 *********** 2025-06-22 20:19:19.590734 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.590745 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.590755 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.590766 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.590777 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.590787 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.590798 | orchestrator | 2025-06-22 20:19:19.590809 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-22 20:19:19.590820 | orchestrator | Sunday 22 June 2025 20:15:58 +0000 (0:00:00.788) 0:05:10.409 *********** 2025-06-22 20:19:19.590830 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.590841 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.590852 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.590863 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.590873 | orchestrator | changed: [testbed-node-4] 
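[editor's note] The 'hostnqn' file generated above identifies each compute host to NVMe-over-Fabrics storage backends (the NVMe analogue of an iSCSI initiator name). The log does not show how kolla-ansible produces the value; by NVMe convention a host NQN usually takes the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the sketch below merely illustrates generating a value in that conventional format.

```python
# Illustrative only: build an NVMe host NQN in the conventional
# "nqn.2014-08.org.nvmexpress:uuid:<uuid4>" format, i.e. the single line
# one would expect to find in a hostnqn file.
import uuid

def generate_hostnqn() -> str:
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

if __name__ == "__main__":
    print(generate_hostnqn())
```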
2025-06-22 20:19:19.590884 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.590894 | orchestrator | 2025-06-22 20:19:19.590905 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-22 20:19:19.590916 | orchestrator | Sunday 22 June 2025 20:16:00 +0000 (0:00:01.985) 0:05:12.395 *********** 2025-06-22 20:19:19.590934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.590946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.590958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.590974 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.590986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.591006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:19.591018 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.591037 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:19.591048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.591065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.591083 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.591094 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.591105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.591117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.591128 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.591140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.591157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.591169 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.591180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:19.591196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:19.591214 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.591225 | orchestrator | 2025-06-22 20:19:19.591236 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-22 20:19:19.591248 | orchestrator | Sunday 22 June 2025 20:16:02 +0000 (0:00:01.622) 0:05:14.017 *********** 2025-06-22 20:19:19.591258 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-22 20:19:19.591270 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-22 20:19:19.591280 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.591291 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-22 20:19:19.591302 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-22 20:19:19.591313 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.591323 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-22 20:19:19.591334 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-22 20:19:19.591345 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.591356 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-22 20:19:19.591367 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-22 20:19:19.591378 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.591388 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-22 20:19:19.591399 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-22 20:19:19.591410 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.591421 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-22 20:19:19.591431 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-22 20:19:19.591493 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.591505 | orchestrator | 2025-06-22 20:19:19.591516 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-22 20:19:19.591527 | orchestrator | Sunday 22 June 2025 20:16:02 +0000 (0:00:00.623) 0:05:14.641 *********** 2025-06-22 20:19:19.591539 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591578 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591595 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591607 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591618 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591649 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591667 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:19.591767 | orchestrator | 2025-06-22 20:19:19.591779 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:19.591790 | orchestrator | Sunday 22 June 2025 20:16:05 +0000 (0:00:02.885) 0:05:17.527 *********** 2025-06-22 20:19:19.591801 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 20:19:19.591812 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.591823 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.591833 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.591844 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.591855 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.591866 | orchestrator | 2025-06-22 20:19:19.591877 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:19.591887 | orchestrator | Sunday 22 June 2025 20:16:06 +0000 (0:00:00.542) 0:05:18.069 *********** 2025-06-22 20:19:19.591899 | orchestrator | 2025-06-22 20:19:19.591909 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:19.591925 | orchestrator | Sunday 22 June 2025 20:16:06 +0000 (0:00:00.303) 0:05:18.373 *********** 2025-06-22 20:19:19.591936 | orchestrator | 2025-06-22 20:19:19.591947 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:19.591958 | orchestrator | Sunday 22 June 2025 20:16:06 +0000 (0:00:00.132) 0:05:18.505 *********** 2025-06-22 20:19:19.591968 | orchestrator | 2025-06-22 20:19:19.591979 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:19.591991 | orchestrator | Sunday 22 June 2025 20:16:06 +0000 (0:00:00.134) 0:05:18.640 *********** 2025-06-22 20:19:19.592001 | orchestrator | 2025-06-22 20:19:19.592012 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:19.592023 | orchestrator | Sunday 22 June 2025 20:16:06 +0000 (0:00:00.135) 0:05:18.775 *********** 2025-06-22 20:19:19.592034 | orchestrator | 2025-06-22 20:19:19.592045 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:19.592055 | orchestrator | Sunday 22 June 2025 20:16:06 +0000 (0:00:00.126) 0:05:18.901 *********** 2025-06-22 20:19:19.592066 | orchestrator | 2025-06-22 20:19:19.592077 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-22 20:19:19.592088 | orchestrator | Sunday 22 June 2025 20:16:07 +0000 (0:00:00.129) 0:05:19.031 *********** 2025-06-22 20:19:19.592099 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:19.592110 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:19.592121 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.592131 | orchestrator | 2025-06-22 20:19:19.592142 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-22 20:19:19.592153 | orchestrator | Sunday 22 June 2025 20:16:17 +0000 (0:00:10.004) 0:05:29.036 *********** 2025-06-22 20:19:19.592164 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.592175 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:19.592185 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:19.592196 | orchestrator | 2025-06-22 20:19:19.592207 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-22 20:19:19.592225 | orchestrator | Sunday 22 June 2025 20:16:28 +0000 (0:00:11.578) 0:05:40.614 *********** 2025-06-22 20:19:19.592235 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.592246 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.592257 | orchestrator | 
changed: [testbed-node-3] 2025-06-22 20:19:19.592268 | orchestrator | 2025-06-22 20:19:19.592278 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-22 20:19:19.592289 | orchestrator | Sunday 22 June 2025 20:16:53 +0000 (0:00:24.789) 0:06:05.404 *********** 2025-06-22 20:19:19.592300 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.592311 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.592322 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.592332 | orchestrator | 2025-06-22 20:19:19.592343 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-22 20:19:19.592354 | orchestrator | Sunday 22 June 2025 20:17:41 +0000 (0:00:47.643) 0:06:53.047 *********** 2025-06-22 20:19:19.592365 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.592376 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.592387 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.592398 | orchestrator | 2025-06-22 20:19:19.592408 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-22 20:19:19.592420 | orchestrator | Sunday 22 June 2025 20:17:42 +0000 (0:00:01.074) 0:06:54.122 *********** 2025-06-22 20:19:19.592430 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.592467 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.592486 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.592503 | orchestrator | 2025-06-22 20:19:19.592521 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-22 20:19:19.592538 | orchestrator | Sunday 22 June 2025 20:17:42 +0000 (0:00:00.788) 0:06:54.910 *********** 2025-06-22 20:19:19.592550 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:19.592560 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:19.592571 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:19.592582 | orchestrator | 2025-06-22 20:19:19.592593 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-22 20:19:19.592604 | orchestrator | Sunday 22 June 2025 20:18:13 +0000 (0:00:30.880) 0:07:25.791 *********** 2025-06-22 20:19:19.592615 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.592625 | orchestrator | 2025-06-22 20:19:19.592636 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-22 20:19:19.592647 | orchestrator | Sunday 22 June 2025 20:18:13 +0000 (0:00:00.156) 0:07:25.947 *********** 2025-06-22 20:19:19.592658 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.592668 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.592679 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.592690 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.592701 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.592712 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
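[editor's note] The "FAILED - RETRYING" line above is expected behaviour rather than an error: after the nova-compute containers are restarted, the play polls the Nova API until every new compute service has registered itself, retrying up to 20 times before giving up. The real task uses Ansible's retry/until mechanism against the OpenStack API; the snippet below is only a generic sketch of that wait-until-registered pattern, where list_registered_computes is a stand-in for a call such as `openstack compute service list --service nova-compute`.

```python
# Illustrative wait-until-registered loop mirroring the "20 retries left" output above.
import time

def wait_for_compute_registration(expected_hosts, list_registered_computes,
                                  retries=20, delay=10):
    """Poll until all expected nova-compute hosts appear as registered services."""
    missing = set(expected_hosts)
    for attempt in range(1, retries + 1):
        missing = set(expected_hosts) - set(list_registered_computes())
        if not missing:
            return True
        print(f"FAILED - RETRYING ({retries - attempt} retries left), missing: {sorted(missing)}")
        time.sleep(delay)
    raise RuntimeError(f"nova-compute services failed to register: {sorted(missing)}")
```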
2025-06-22 20:19:19.592723 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:19:19.592733 | orchestrator | 2025-06-22 20:19:19.592744 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-22 20:19:19.592755 | orchestrator | Sunday 22 June 2025 20:18:36 +0000 (0:00:22.534) 0:07:48.481 *********** 2025-06-22 20:19:19.592766 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.592776 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.592787 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.592798 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.592809 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.592819 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.592830 | orchestrator | 2025-06-22 20:19:19.592841 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-22 20:19:19.592861 | orchestrator | Sunday 22 June 2025 20:18:44 +0000 (0:00:07.598) 0:07:56.080 *********** 2025-06-22 20:19:19.592872 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.592882 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.592893 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.592909 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.592920 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.592931 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-06-22 20:19:19.592941 | orchestrator | 2025-06-22 20:19:19.592952 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 20:19:19.592963 | orchestrator | Sunday 22 June 2025 20:18:47 +0000 (0:00:03.543) 0:07:59.623 *********** 2025-06-22 20:19:19.592974 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:19:19.592985 | orchestrator | 2025-06-22 20:19:19.592996 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 20:19:19.593007 | orchestrator | Sunday 22 June 2025 20:18:58 +0000 (0:00:10.583) 0:08:10.207 *********** 2025-06-22 20:19:19.593018 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:19:19.593028 | orchestrator | 2025-06-22 20:19:19.593040 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-22 20:19:19.593051 | orchestrator | Sunday 22 June 2025 20:18:59 +0000 (0:00:01.152) 0:08:11.359 *********** 2025-06-22 20:19:19.593061 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.593072 | orchestrator | 2025-06-22 20:19:19.593083 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-22 20:19:19.593094 | orchestrator | Sunday 22 June 2025 20:19:00 +0000 (0:00:01.289) 0:08:12.649 *********** 2025-06-22 20:19:19.593104 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:19:19.593115 | orchestrator | 2025-06-22 20:19:19.593126 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-22 20:19:19.593137 | orchestrator | Sunday 22 June 2025 20:19:11 +0000 (0:00:10.477) 0:08:23.127 *********** 2025-06-22 20:19:19.593147 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:19:19.593158 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:19:19.593169 | orchestrator | ok: 
[testbed-node-5] 2025-06-22 20:19:19.593180 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:19.593190 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:19:19.593201 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:19:19.593211 | orchestrator | 2025-06-22 20:19:19.593222 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-22 20:19:19.593233 | orchestrator | 2025-06-22 20:19:19.593244 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-22 20:19:19.593255 | orchestrator | Sunday 22 June 2025 20:19:12 +0000 (0:00:01.749) 0:08:24.877 *********** 2025-06-22 20:19:19.593265 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:19.593276 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:19.593287 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:19.593297 | orchestrator | 2025-06-22 20:19:19.593308 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-22 20:19:19.593319 | orchestrator | 2025-06-22 20:19:19.593330 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-22 20:19:19.593341 | orchestrator | Sunday 22 June 2025 20:19:13 +0000 (0:00:01.109) 0:08:25.986 *********** 2025-06-22 20:19:19.593352 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.593362 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.593373 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.593384 | orchestrator | 2025-06-22 20:19:19.593395 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-22 20:19:19.593406 | orchestrator | 2025-06-22 20:19:19.593417 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-22 20:19:19.593428 | orchestrator | Sunday 22 June 2025 20:19:14 +0000 (0:00:00.530) 0:08:26.516 *********** 2025-06-22 20:19:19.593498 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-22 20:19:19.593518 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-22 20:19:19.593530 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-22 20:19:19.593541 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-22 20:19:19.593552 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-22 20:19:19.593563 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:19.593574 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:19.593584 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-22 20:19:19.593595 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-22 20:19:19.593606 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-22 20:19:19.593617 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-22 20:19:19.593628 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-22 20:19:19.593639 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:19.593650 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:19.593661 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-22 20:19:19.593672 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-22 20:19:19.593683 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-22 20:19:19.593694 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-22 20:19:19.593705 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-22 20:19:19.593716 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:19.593726 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:19.593737 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-22 20:19:19.593748 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-22 20:19:19.593759 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-22 20:19:19.593770 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-22 20:19:19.593781 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-22 20:19:19.593792 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:19.593803 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.593819 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-22 20:19:19.593830 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-22 20:19:19.593841 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-22 20:19:19.593852 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-22 20:19:19.593863 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-22 20:19:19.593874 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:19.593885 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.593896 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-22 20:19:19.593906 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-22 20:19:19.593917 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-22 20:19:19.593928 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-22 20:19:19.593939 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-22 20:19:19.593950 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:19.593960 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.593969 | orchestrator | 2025-06-22 20:19:19.593979 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-22 20:19:19.593989 | orchestrator | 2025-06-22 20:19:19.593999 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-22 20:19:19.594041 | orchestrator | Sunday 22 June 2025 20:19:15 +0000 (0:00:01.297) 0:08:27.813 *********** 2025-06-22 20:19:19.594054 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-22 20:19:19.594063 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-22 20:19:19.594073 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.594082 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-22 20:19:19.594092 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-22 20:19:19.594101 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.594111 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-22 20:19:19.594120 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-06-22 20:19:19.594130 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.594139 | orchestrator | 2025-06-22 20:19:19.594149 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-22 20:19:19.594158 | orchestrator | 2025-06-22 20:19:19.594168 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-22 20:19:19.594177 | orchestrator | Sunday 22 June 2025 20:19:16 +0000 (0:00:00.716) 0:08:28.530 *********** 2025-06-22 20:19:19.594187 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.594196 | orchestrator | 2025-06-22 20:19:19.594206 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-22 20:19:19.594215 | orchestrator | 2025-06-22 20:19:19.594225 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-22 20:19:19.594234 | orchestrator | Sunday 22 June 2025 20:19:17 +0000 (0:00:00.666) 0:08:29.197 *********** 2025-06-22 20:19:19.594244 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:19.594253 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:19.594263 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:19.594272 | orchestrator | 2025-06-22 20:19:19.594282 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:19:19.594292 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:19:19.594308 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-22 20:19:19.594318 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-22 20:19:19.594328 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-22 20:19:19.594338 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-22 20:19:19.594347 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-22 20:19:19.594357 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0 2025-06-22 20:19:19.594367 | orchestrator | 2025-06-22 20:19:19.594377 | orchestrator | 2025-06-22 20:19:19.594386 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:19:19.594396 | orchestrator | Sunday 22 June 2025 20:19:17 +0000 (0:00:00.450) 0:08:29.647 *********** 2025-06-22 20:19:19.594406 | orchestrator | =============================================================================== 2025-06-22 20:19:19.594415 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 47.64s 2025-06-22 20:19:19.594425 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.09s 2025-06-22 20:19:19.594458 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.88s 2025-06-22 20:19:19.594483 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 24.79s 2025-06-22 20:19:19.594497 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.53s 2025-06-22 20:19:19.594513 | orchestrator | nova : 
Restart nova-scheduler container -------------------------------- 18.94s 2025-06-22 20:19:19.594523 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 17.80s 2025-06-22 20:19:19.594532 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.84s 2025-06-22 20:19:19.594542 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.58s 2025-06-22 20:19:19.594551 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.82s 2025-06-22 20:19:19.594561 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.73s 2025-06-22 20:19:19.594570 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.58s 2025-06-22 20:19:19.594579 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.62s 2025-06-22 20:19:19.594589 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.58s 2025-06-22 20:19:19.594599 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.48s 2025-06-22 20:19:19.594608 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.28s 2025-06-22 20:19:19.594617 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 10.00s 2025-06-22 20:19:19.594627 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.57s 2025-06-22 20:19:19.594636 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.10s 2025-06-22 20:19:19.594646 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 7.60s 2025-06-22 20:19:19.594655 | orchestrator | 2025-06-22 20:19:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:22.623854 | orchestrator | 2025-06-22 20:19:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:25.665925 | orchestrator | 2025-06-22 20:19:25 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:28.709007 | orchestrator | 2025-06-22 20:19:28 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:31.742484 | orchestrator | 2025-06-22 20:19:31 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:34.781355 | orchestrator | 2025-06-22 20:19:34 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:37.824287 | orchestrator | 2025-06-22 20:19:37 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:40.869682 | orchestrator | 2025-06-22 20:19:40 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:43.910409 | orchestrator | 2025-06-22 20:19:43 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:46.948949 | orchestrator | 2025-06-22 20:19:46 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:49.994868 | orchestrator | 2025-06-22 20:19:49 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:53.032915 | orchestrator | 2025-06-22 20:19:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:56.075144 | orchestrator | 2025-06-22 20:19:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:59.122067 | orchestrator | 2025-06-22 20:19:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:02.166303 | 
orchestrator | 2025-06-22 20:20:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:05.203676 | orchestrator | 2025-06-22 20:20:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:08.246663 | orchestrator | 2025-06-22 20:20:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:11.288701 | orchestrator | 2025-06-22 20:20:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:14.324004 | orchestrator | 2025-06-22 20:20:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:17.361323 | orchestrator | 2025-06-22 20:20:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:20.399518 | orchestrator | 2025-06-22 20:20:20.646272 | orchestrator | 2025-06-22 20:20:20.652660 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Jun 22 20:20:20 UTC 2025 2025-06-22 20:20:20.652899 | orchestrator | 2025-06-22 20:20:20.981004 | orchestrator | ok: Runtime: 0:34:51.686087 2025-06-22 20:20:21.223326 | 2025-06-22 20:20:21.223469 | TASK [Bootstrap services] 2025-06-22 20:20:21.941109 | orchestrator | 2025-06-22 20:20:21.941298 | orchestrator | # BOOTSTRAP 2025-06-22 20:20:21.941323 | orchestrator | 2025-06-22 20:20:21.941337 | orchestrator | + set -e 2025-06-22 20:20:21.941351 | orchestrator | + echo 2025-06-22 20:20:21.941364 | orchestrator | + echo '# BOOTSTRAP' 2025-06-22 20:20:21.941382 | orchestrator | + echo 2025-06-22 20:20:21.941450 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-22 20:20:21.950608 | orchestrator | + set -e 2025-06-22 20:20:21.951106 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-22 20:20:25.708103 | orchestrator | 2025-06-22 20:20:25 | INFO  | It takes a moment until task a8471373-80e3-495b-8df9-6d85d05f1d44 (flavor-manager) has been started and output is visible here. 
2025-06-22 20:20:33.533900 | orchestrator | 2025-06-22 20:20:29 | INFO  | Flavor SCS-1V-4 created 2025-06-22 20:20:33.534121 | orchestrator | 2025-06-22 20:20:29 | INFO  | Flavor SCS-2V-8 created 2025-06-22 20:20:33.534147 | orchestrator | 2025-06-22 20:20:29 | INFO  | Flavor SCS-4V-16 created 2025-06-22 20:20:33.534161 | orchestrator | 2025-06-22 20:20:30 | INFO  | Flavor SCS-8V-32 created 2025-06-22 20:20:33.534173 | orchestrator | 2025-06-22 20:20:30 | INFO  | Flavor SCS-1V-2 created 2025-06-22 20:20:33.534184 | orchestrator | 2025-06-22 20:20:30 | INFO  | Flavor SCS-2V-4 created 2025-06-22 20:20:33.534196 | orchestrator | 2025-06-22 20:20:30 | INFO  | Flavor SCS-4V-8 created 2025-06-22 20:20:33.534208 | orchestrator | 2025-06-22 20:20:30 | INFO  | Flavor SCS-8V-16 created 2025-06-22 20:20:33.534231 | orchestrator | 2025-06-22 20:20:30 | INFO  | Flavor SCS-16V-32 created 2025-06-22 20:20:33.534243 | orchestrator | 2025-06-22 20:20:30 | INFO  | Flavor SCS-1V-8 created 2025-06-22 20:20:33.534254 | orchestrator | 2025-06-22 20:20:30 | INFO  | Flavor SCS-2V-16 created 2025-06-22 20:20:33.534266 | orchestrator | 2025-06-22 20:20:31 | INFO  | Flavor SCS-4V-32 created 2025-06-22 20:20:33.534277 | orchestrator | 2025-06-22 20:20:31 | INFO  | Flavor SCS-1L-1 created 2025-06-22 20:20:33.534288 | orchestrator | 2025-06-22 20:20:31 | INFO  | Flavor SCS-2V-4-20s created 2025-06-22 20:20:33.534299 | orchestrator | 2025-06-22 20:20:31 | INFO  | Flavor SCS-4V-16-100s created 2025-06-22 20:20:33.534310 | orchestrator | 2025-06-22 20:20:31 | INFO  | Flavor SCS-1V-4-10 created 2025-06-22 20:20:33.534321 | orchestrator | 2025-06-22 20:20:31 | INFO  | Flavor SCS-2V-8-20 created 2025-06-22 20:20:33.534332 | orchestrator | 2025-06-22 20:20:31 | INFO  | Flavor SCS-4V-16-50 created 2025-06-22 20:20:33.534343 | orchestrator | 2025-06-22 20:20:32 | INFO  | Flavor SCS-8V-32-100 created 2025-06-22 20:20:33.534355 | orchestrator | 2025-06-22 20:20:32 | INFO  | Flavor SCS-1V-2-5 created 2025-06-22 20:20:33.534366 | orchestrator | 2025-06-22 20:20:32 | INFO  | Flavor SCS-2V-4-10 created 2025-06-22 20:20:33.534377 | orchestrator | 2025-06-22 20:20:32 | INFO  | Flavor SCS-4V-8-20 created 2025-06-22 20:20:33.534422 | orchestrator | 2025-06-22 20:20:32 | INFO  | Flavor SCS-8V-16-50 created 2025-06-22 20:20:33.534436 | orchestrator | 2025-06-22 20:20:32 | INFO  | Flavor SCS-16V-32-100 created 2025-06-22 20:20:33.534447 | orchestrator | 2025-06-22 20:20:32 | INFO  | Flavor SCS-1V-8-20 created 2025-06-22 20:20:33.534458 | orchestrator | 2025-06-22 20:20:33 | INFO  | Flavor SCS-2V-16-50 created 2025-06-22 20:20:33.534468 | orchestrator | 2025-06-22 20:20:33 | INFO  | Flavor SCS-4V-32-100 created 2025-06-22 20:20:33.534480 | orchestrator | 2025-06-22 20:20:33 | INFO  | Flavor SCS-1L-1-5 created 2025-06-22 20:20:35.502816 | orchestrator | 2025-06-22 20:20:35 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-22 20:20:35.507240 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:20:35.507288 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:20:35.507334 | orchestrator | Registering Redlock._release_script 2025-06-22 20:20:35.563234 | orchestrator | 2025-06-22 20:20:35 | INFO  | Task af4fc349-8f70-4beb-9f76-a7aa315ab6f0 (bootstrap-basic) was prepared for execution. 2025-06-22 20:20:35.563326 | orchestrator | 2025-06-22 20:20:35 | INFO  | It takes a moment until task af4fc349-8f70-4beb-9f76-a7aa315ab6f0 (bootstrap-basic) has been started and output is visible here. 
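(Editor's note: the flavors above follow the SCS naming scheme that the flavor-manager reads from a YAML spec: roughly SCS-<vCPUs>V-<RAM in GiB>, an optional third number for the root disk size in GB, a trailing "s" for SSD-backed disk, and "L" instead of "V" for low-performance vCPUs. As an illustration only, not what the flavor-manager actually executes, an equivalent flavor could be created by hand with:

  openstack flavor create --vcpus 2 --ram 8192 --disk 0 --public SCS-2V-8

The bootstrap-basic play prepared next seeds the remaining base resources (volume types, public network, default roles); its output follows below.)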
2025-06-22 20:21:36.743886 | orchestrator | 2025-06-22 20:21:36.744000 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-22 20:21:36.744017 | orchestrator | 2025-06-22 20:21:36.744030 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 20:21:36.744044 | orchestrator | Sunday 22 June 2025 20:20:39 +0000 (0:00:00.075) 0:00:00.075 *********** 2025-06-22 20:21:36.744055 | orchestrator | ok: [localhost] 2025-06-22 20:21:36.744068 | orchestrator | 2025-06-22 20:21:36.744079 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-22 20:21:36.744090 | orchestrator | Sunday 22 June 2025 20:20:41 +0000 (0:00:01.778) 0:00:01.854 *********** 2025-06-22 20:21:36.744101 | orchestrator | ok: [localhost] 2025-06-22 20:21:36.744112 | orchestrator | 2025-06-22 20:21:36.744122 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-22 20:21:36.744133 | orchestrator | Sunday 22 June 2025 20:20:48 +0000 (0:00:07.686) 0:00:09.541 *********** 2025-06-22 20:21:36.744144 | orchestrator | changed: [localhost] 2025-06-22 20:21:36.744156 | orchestrator | 2025-06-22 20:21:36.744167 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-22 20:21:36.744178 | orchestrator | Sunday 22 June 2025 20:20:56 +0000 (0:00:07.269) 0:00:16.810 *********** 2025-06-22 20:21:36.744189 | orchestrator | ok: [localhost] 2025-06-22 20:21:36.744199 | orchestrator | 2025-06-22 20:21:36.744215 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-22 20:21:36.744226 | orchestrator | Sunday 22 June 2025 20:21:02 +0000 (0:00:06.336) 0:00:23.147 *********** 2025-06-22 20:21:36.744237 | orchestrator | changed: [localhost] 2025-06-22 20:21:36.744248 | orchestrator | 2025-06-22 20:21:36.744259 | orchestrator | TASK [Create public network] *************************************************** 2025-06-22 20:21:36.744270 | orchestrator | Sunday 22 June 2025 20:21:09 +0000 (0:00:06.638) 0:00:29.786 *********** 2025-06-22 20:21:36.744280 | orchestrator | changed: [localhost] 2025-06-22 20:21:36.744291 | orchestrator | 2025-06-22 20:21:36.744320 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-22 20:21:36.744331 | orchestrator | Sunday 22 June 2025 20:21:16 +0000 (0:00:06.927) 0:00:36.713 *********** 2025-06-22 20:21:36.744342 | orchestrator | changed: [localhost] 2025-06-22 20:21:36.744353 | orchestrator | 2025-06-22 20:21:36.744364 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-22 20:21:36.744374 | orchestrator | Sunday 22 June 2025 20:21:23 +0000 (0:00:07.450) 0:00:44.163 *********** 2025-06-22 20:21:36.744385 | orchestrator | changed: [localhost] 2025-06-22 20:21:36.744435 | orchestrator | 2025-06-22 20:21:36.744449 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-22 20:21:36.744460 | orchestrator | Sunday 22 June 2025 20:21:28 +0000 (0:00:04.867) 0:00:49.031 *********** 2025-06-22 20:21:36.744470 | orchestrator | changed: [localhost] 2025-06-22 20:21:36.744481 | orchestrator | 2025-06-22 20:21:36.744492 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-22 20:21:36.744503 | orchestrator | Sunday 22 June 2025 20:21:32 
+0000 (0:00:04.476) 0:00:53.507 *********** 2025-06-22 20:21:36.744514 | orchestrator | ok: [localhost] 2025-06-22 20:21:36.744524 | orchestrator | 2025-06-22 20:21:36.744535 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:21:36.744546 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:21:36.744558 | orchestrator | 2025-06-22 20:21:36.744584 | orchestrator | 2025-06-22 20:21:36.744696 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:21:36.744716 | orchestrator | Sunday 22 June 2025 20:21:36 +0000 (0:00:03.569) 0:00:57.077 *********** 2025-06-22 20:21:36.744728 | orchestrator | =============================================================================== 2025-06-22 20:21:36.744747 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.69s 2025-06-22 20:21:36.744765 | orchestrator | Set public network to default ------------------------------------------- 7.45s 2025-06-22 20:21:36.744783 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.27s 2025-06-22 20:21:36.744826 | orchestrator | Create public network --------------------------------------------------- 6.93s 2025-06-22 20:21:36.744842 | orchestrator | Create volume type local ------------------------------------------------ 6.64s 2025-06-22 20:21:36.744852 | orchestrator | Get volume type local --------------------------------------------------- 6.34s 2025-06-22 20:21:36.744863 | orchestrator | Create public subnet ---------------------------------------------------- 4.87s 2025-06-22 20:21:36.744874 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.48s 2025-06-22 20:21:36.744884 | orchestrator | Create manager role ----------------------------------------------------- 3.57s 2025-06-22 20:21:36.744895 | orchestrator | Gathering Facts --------------------------------------------------------- 1.78s 2025-06-22 20:21:38.984985 | orchestrator | 2025-06-22 20:21:38 | INFO  | It takes a moment until task 3955c079-afa5-428d-a34f-e2bf34e4a299 (image-manager) has been started and output is visible here. 2025-06-22 20:22:19.942659 | orchestrator | 2025-06-22 20:21:42 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-22 20:22:19.942782 | orchestrator | 2025-06-22 20:21:42 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-22 20:22:19.942809 | orchestrator | 2025-06-22 20:21:42 | INFO  | Importing image Cirros 0.6.2 2025-06-22 20:22:19.942830 | orchestrator | 2025-06-22 20:21:42 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-22 20:22:19.942850 | orchestrator | 2025-06-22 20:21:44 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:22:19.942879 | orchestrator | 2025-06-22 20:21:46 | INFO  | Waiting for import to complete... 
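(Editor's note: bootstrap-basic has just created the LUKS and local volume types, the "public" network with its subnet and default IPv4 subnet pool, and the manager role; the image-manager task now imports the Cirros images and applies the property set shown below. As a minimal sketch, assuming admin credentials and a locally downloaded image file, a manual equivalent of these steps would look like:

  openstack volume type create LUKS
  openstack network create --external public
  openstack image create --disk-format qcow2 --container-format bare --public --file cirros-0.6.2-x86_64-disk.img "Cirros 0.6.2"
  openstack image set --property hw_disk_bus=scsi --property hw_scsi_model=virtio-scsi "Cirros 0.6.2"

The image-manager itself imports straight from the upstream URL and manages these properties declaratively, as the log entries show.)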
2025-06-22 20:22:19.942898 | orchestrator | 2025-06-22 20:21:56 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-22 20:22:19.942919 | orchestrator | 2025-06-22 20:21:56 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-22 20:22:19.942937 | orchestrator | 2025-06-22 20:21:56 | INFO  | Setting internal_version = 0.6.2 2025-06-22 20:22:19.942955 | orchestrator | 2025-06-22 20:21:56 | INFO  | Setting image_original_user = cirros 2025-06-22 20:22:19.942971 | orchestrator | 2025-06-22 20:21:56 | INFO  | Adding tag os:cirros 2025-06-22 20:22:19.942988 | orchestrator | 2025-06-22 20:21:57 | INFO  | Setting property architecture: x86_64 2025-06-22 20:22:19.943007 | orchestrator | 2025-06-22 20:21:57 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:22:19.943024 | orchestrator | 2025-06-22 20:21:57 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:22:19.943043 | orchestrator | 2025-06-22 20:21:57 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:22:19.943062 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:22:19.943079 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:22:19.943091 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property os_distro: cirros 2025-06-22 20:22:19.943113 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property replace_frequency: never 2025-06-22 20:22:19.943149 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property uuid_validity: none 2025-06-22 20:22:19.943162 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property provided_until: none 2025-06-22 20:22:19.943178 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property image_description: Cirros 2025-06-22 20:22:19.943191 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property image_name: Cirros 2025-06-22 20:22:19.943203 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property internal_version: 0.6.2 2025-06-22 20:22:19.943215 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property image_original_user: cirros 2025-06-22 20:22:19.943227 | orchestrator | 2025-06-22 20:22:00 | INFO  | Setting property os_version: 0.6.2 2025-06-22 20:22:19.943239 | orchestrator | 2025-06-22 20:22:00 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-22 20:22:19.943254 | orchestrator | 2025-06-22 20:22:00 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-22 20:22:19.943266 | orchestrator | 2025-06-22 20:22:00 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-22 20:22:19.943278 | orchestrator | 2025-06-22 20:22:00 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-22 20:22:19.943295 | orchestrator | 2025-06-22 20:22:00 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-22 20:22:19.943315 | orchestrator | 2025-06-22 20:22:00 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-22 20:22:19.943334 | orchestrator | 2025-06-22 20:22:01 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-22 20:22:19.943351 | orchestrator | 2025-06-22 20:22:01 | INFO  | Importing image Cirros 0.6.3 2025-06-22 20:22:19.943366 | orchestrator | 2025-06-22 20:22:01 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-22 20:22:19.943385 | orchestrator | 2025-06-22 
20:22:02 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:22:19.943405 | orchestrator | 2025-06-22 20:22:04 | INFO  | Waiting for import to complete... 2025-06-22 20:22:19.943457 | orchestrator | 2025-06-22 20:22:14 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-22 20:22:19.943504 | orchestrator | 2025-06-22 20:22:15 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-22 20:22:19.943519 | orchestrator | 2025-06-22 20:22:15 | INFO  | Setting internal_version = 0.6.3 2025-06-22 20:22:19.943531 | orchestrator | 2025-06-22 20:22:15 | INFO  | Setting image_original_user = cirros 2025-06-22 20:22:19.943542 | orchestrator | 2025-06-22 20:22:15 | INFO  | Adding tag os:cirros 2025-06-22 20:22:19.943553 | orchestrator | 2025-06-22 20:22:15 | INFO  | Setting property architecture: x86_64 2025-06-22 20:22:19.943564 | orchestrator | 2025-06-22 20:22:15 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:22:19.943574 | orchestrator | 2025-06-22 20:22:15 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:22:19.943585 | orchestrator | 2025-06-22 20:22:15 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:22:19.943595 | orchestrator | 2025-06-22 20:22:16 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:22:19.943606 | orchestrator | 2025-06-22 20:22:16 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:22:19.943616 | orchestrator | 2025-06-22 20:22:16 | INFO  | Setting property os_distro: cirros 2025-06-22 20:22:19.943632 | orchestrator | 2025-06-22 20:22:16 | INFO  | Setting property replace_frequency: never 2025-06-22 20:22:19.943665 | orchestrator | 2025-06-22 20:22:16 | INFO  | Setting property uuid_validity: none 2025-06-22 20:22:19.943680 | orchestrator | 2025-06-22 20:22:17 | INFO  | Setting property provided_until: none 2025-06-22 20:22:19.943691 | orchestrator | 2025-06-22 20:22:17 | INFO  | Setting property image_description: Cirros 2025-06-22 20:22:19.943701 | orchestrator | 2025-06-22 20:22:17 | INFO  | Setting property image_name: Cirros 2025-06-22 20:22:19.943712 | orchestrator | 2025-06-22 20:22:18 | INFO  | Setting property internal_version: 0.6.3 2025-06-22 20:22:19.943743 | orchestrator | 2025-06-22 20:22:18 | INFO  | Setting property image_original_user: cirros 2025-06-22 20:22:19.943774 | orchestrator | 2025-06-22 20:22:18 | INFO  | Setting property os_version: 0.6.3 2025-06-22 20:22:19.943785 | orchestrator | 2025-06-22 20:22:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-22 20:22:19.943808 | orchestrator | 2025-06-22 20:22:18 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-22 20:22:19.943830 | orchestrator | 2025-06-22 20:22:19 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-22 20:22:19.943844 | orchestrator | 2025-06-22 20:22:19 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-22 20:22:19.943862 | orchestrator | 2025-06-22 20:22:19 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-22 20:22:20.205973 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-22 20:22:22.149632 | orchestrator | 2025-06-22 20:22:22 | INFO  | date: 2025-06-22 2025-06-22 20:22:22.149734 | orchestrator | 2025-06-22 20:22:22 | INFO  | image: octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:22:22.149752 | orchestrator | 2025-06-22 20:22:22 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:22:22.149789 | orchestrator | 2025-06-22 20:22:22 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2.CHECKSUM 2025-06-22 20:22:22.167888 | orchestrator | 2025-06-22 20:22:22 | INFO  | checksum: 77df9fefb5aab55dc760a767e58162a9735f5740229c1da42280293548a761a7 2025-06-22 20:22:22.234206 | orchestrator | 2025-06-22 20:22:22 | INFO  | It takes a moment until task 6cca02d1-1872-4c11-a1d7-d15c1c1ac160 (image-manager) has been started and output is visible here. 2025-06-22 20:23:22.772150 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-06-22 20:23:22.772262 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-22 20:23:22.772279 | orchestrator | 2025-06-22 20:22:24 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:23:22.772294 | orchestrator | 2025-06-22 20:22:24 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2: 200 2025-06-22 20:23:22.772307 | orchestrator | 2025-06-22 20:22:24 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-22 2025-06-22 20:23:22.772318 | orchestrator | 2025-06-22 20:22:24 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:23:22.772356 | orchestrator | 2025-06-22 20:22:25 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:23:22.772369 | orchestrator | 2025-06-22 20:22:27 | INFO  | Waiting for import to complete... 2025-06-22 20:23:22.772380 | orchestrator | 2025-06-22 20:22:37 | INFO  | Waiting for import to complete... 2025-06-22 20:23:22.772391 | orchestrator | 2025-06-22 20:22:47 | INFO  | Waiting for import to complete... 2025-06-22 20:23:22.772403 | orchestrator | 2025-06-22 20:22:57 | INFO  | Waiting for import to complete... 2025-06-22 20:23:22.772413 | orchestrator | 2025-06-22 20:23:07 | INFO  | Waiting for import to complete... 
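(Editor's note: the amphora image is fetched from the Swift URL above, and its published SHA256 checksum is logged before the import starts. As a minimal sketch, assuming the qcow2 file has been downloaded locally purely for an independent check, the checksum and the resulting image tag can be verified with:

  sha256sum octavia-amphora-haproxy-2024.2.20250622.qcow2
  openstack image list --tag amphora

Octavia locates its amphora image via that "amphora" tag, which the image-manager adds right after the import completes below.)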
2025-06-22 20:23:22.772435 | orchestrator | 2025-06-22 20:23:17 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-22' successfully completed, reloading images 2025-06-22 20:23:22.772487 | orchestrator | 2025-06-22 20:23:18 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:23:22.772500 | orchestrator | 2025-06-22 20:23:18 | INFO  | Setting internal_version = 2025-06-22 2025-06-22 20:23:22.772511 | orchestrator | 2025-06-22 20:23:18 | INFO  | Setting image_original_user = ubuntu 2025-06-22 20:23:22.772521 | orchestrator | 2025-06-22 20:23:18 | INFO  | Adding tag amphora 2025-06-22 20:23:22.772533 | orchestrator | 2025-06-22 20:23:18 | INFO  | Adding tag os:ubuntu 2025-06-22 20:23:22.772543 | orchestrator | 2025-06-22 20:23:18 | INFO  | Setting property architecture: x86_64 2025-06-22 20:23:22.772554 | orchestrator | 2025-06-22 20:23:18 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:23:22.772565 | orchestrator | 2025-06-22 20:23:19 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:23:22.772575 | orchestrator | 2025-06-22 20:23:19 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:23:22.772586 | orchestrator | 2025-06-22 20:23:19 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:23:22.772597 | orchestrator | 2025-06-22 20:23:19 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:23:22.772608 | orchestrator | 2025-06-22 20:23:19 | INFO  | Setting property os_distro: ubuntu 2025-06-22 20:23:22.772619 | orchestrator | 2025-06-22 20:23:20 | INFO  | Setting property replace_frequency: quarterly 2025-06-22 20:23:22.772630 | orchestrator | 2025-06-22 20:23:20 | INFO  | Setting property uuid_validity: last-1 2025-06-22 20:23:22.772641 | orchestrator | 2025-06-22 20:23:20 | INFO  | Setting property provided_until: none 2025-06-22 20:23:22.772651 | orchestrator | 2025-06-22 20:23:20 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-22 20:23:22.772662 | orchestrator | 2025-06-22 20:23:21 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-22 20:23:22.772675 | orchestrator | 2025-06-22 20:23:21 | INFO  | Setting property internal_version: 2025-06-22 2025-06-22 20:23:22.772687 | orchestrator | 2025-06-22 20:23:21 | INFO  | Setting property image_original_user: ubuntu 2025-06-22 20:23:22.772700 | orchestrator | 2025-06-22 20:23:21 | INFO  | Setting property os_version: 2025-06-22 2025-06-22 20:23:22.772713 | orchestrator | 2025-06-22 20:23:21 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:23:22.772743 | orchestrator | 2025-06-22 20:23:22 | INFO  | Setting property image_build_date: 2025-06-22 2025-06-22 20:23:22.772757 | orchestrator | 2025-06-22 20:23:22 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:23:22.772777 | orchestrator | 2025-06-22 20:23:22 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:23:22.772789 | orchestrator | 2025-06-22 20:23:22 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-22 20:23:22.772801 | orchestrator | 2025-06-22 20:23:22 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-22 20:23:22.772815 | orchestrator | 2025-06-22 20:23:22 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-22 20:23:22.772827 | 
orchestrator | 2025-06-22 20:23:22 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-22 20:23:23.387641 | orchestrator | ok: Runtime: 0:03:01.489278 2025-06-22 20:23:23.451791 | 2025-06-22 20:23:23.451929 | TASK [Run checks] 2025-06-22 20:23:24.156987 | orchestrator | + set -e 2025-06-22 20:23:24.157165 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:23:24.157189 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 20:23:24.157210 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:23:24.157225 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:23:24.157238 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:23:24.157252 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:23:24.157810 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 20:23:24.162956 | orchestrator | 2025-06-22 20:23:24.163032 | orchestrator | # CHECK 2025-06-22 20:23:24.163048 | orchestrator | 2025-06-22 20:23:24.163061 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-22 20:23:24.163077 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-22 20:23:24.163089 | orchestrator | + echo 2025-06-22 20:23:24.163100 | orchestrator | + echo '# CHECK' 2025-06-22 20:23:24.163111 | orchestrator | + echo 2025-06-22 20:23:24.163126 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:23:24.164481 | orchestrator | ++ semver latest 5.0.0 2025-06-22 20:23:24.229244 | orchestrator | 2025-06-22 20:23:24.229347 | orchestrator | ## Containers @ testbed-manager 2025-06-22 20:23:24.229362 | orchestrator | 2025-06-22 20:23:24.229376 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-22 20:23:24.229387 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-22 20:23:24.229398 | orchestrator | + echo 2025-06-22 20:23:24.229411 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-22 20:23:24.229423 | orchestrator | + echo 2025-06-22 20:23:24.229434 | orchestrator | + osism container testbed-manager ps 2025-06-22 20:23:26.636875 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:23:26.637013 | orchestrator | b245fc5fc786 registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_blackbox_exporter 2025-06-22 20:23:26.637051 | orchestrator | 71a60575ede0 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_alertmanager 2025-06-22 20:23:26.637071 | orchestrator | 181766ead005 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-22 20:23:26.637083 | orchestrator | 35b6f80c2b55 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-06-22 20:23:26.637095 | orchestrator | 13d9ec8914da registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server 2025-06-22 20:23:26.637112 | orchestrator | cdcaaa07a57e registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 18 minutes ago Up 17 minutes cephclient 2025-06-22 20:23:26.637124 | orchestrator | a093adfc3032 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes cron 2025-06-22 20:23:26.637136 | orchestrator | 
37f681b3af3c registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-22 20:23:26.637148 | orchestrator | 7c25854c62c1 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes fluentd 2025-06-22 20:23:26.637186 | orchestrator | f1b701b9ae62 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin 2025-06-22 20:23:26.637199 | orchestrator | c0ae497d5425 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 31 minutes openstackclient 2025-06-22 20:23:26.637210 | orchestrator | eeb189e31387 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 31 minutes (healthy) 8080/tcp homer 2025-06-22 20:23:26.637222 | orchestrator | d432d49103cd registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 52 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-22 20:23:26.637234 | orchestrator | 11d44e7b1152 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 55 minutes ago Up 38 minutes (healthy) manager-inventory_reconciler-1 2025-06-22 20:23:26.637245 | orchestrator | aad43b42385e registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) osism-ansible 2025-06-22 20:23:26.637277 | orchestrator | 288cd7275965 registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) ceph-ansible 2025-06-22 20:23:26.637294 | orchestrator | 2911f56b26c7 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) osism-kubernetes 2025-06-22 20:23:26.637306 | orchestrator | 90fc327fd394 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 55 minutes ago Up 38 minutes (healthy) kolla-ansible 2025-06-22 20:23:26.637317 | orchestrator | dc7a99c971f6 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 55 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-22 20:23:26.637329 | orchestrator | 141c0baf5998 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 55 minutes ago Up 39 minutes (healthy) osismclient 2025-06-22 20:23:26.637340 | orchestrator | fc5abe8f4872 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 55 minutes ago Up 39 minutes (healthy) 6379/tcp manager-redis-1 2025-06-22 20:23:26.637351 | orchestrator | d1aeb87e4e65 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 39 minutes (healthy) manager-flower-1 2025-06-22 20:23:26.637362 | orchestrator | fc3c56a8cbdb registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 55 minutes ago Up 39 minutes (healthy) 3306/tcp manager-mariadb-1 2025-06-22 20:23:26.637373 | orchestrator | bed485c7e3c3 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 39 minutes (healthy) manager-openstack-1 2025-06-22 20:23:26.637392 | orchestrator | 01ac1c43dced registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 39 minutes (healthy) manager-listener-1 2025-06-22 20:23:26.637404 | orchestrator | f5e7724d0b06 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 39 minutes (healthy) manager-beat-1 2025-06-22 20:23:26.637415 | orchestrator | 7d9f91fb1167 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 55 minutes ago Up 39 minutes 
(healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-22 20:23:26.637427 | orchestrator | 2a3719d1256e registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-22 20:23:26.892383 | orchestrator | 2025-06-22 20:23:26.892530 | orchestrator | ## Images @ testbed-manager 2025-06-22 20:23:26.892548 | orchestrator | 2025-06-22 20:23:26.892561 | orchestrator | + echo 2025-06-22 20:23:26.892573 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-22 20:23:26.892585 | orchestrator | + echo 2025-06-22 20:23:26.892596 | orchestrator | + osism container testbed-manager images 2025-06-22 20:23:28.947045 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:23:28.947173 | orchestrator | registry.osism.tech/osism/osism latest 2ecab9bf1a3b 4 hours ago 312MB 2025-06-22 20:23:28.947191 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e2c78a28297e 17 hours ago 11.5MB 2025-06-22 20:23:28.947203 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 31eca7c9891c 17 hours ago 226MB 2025-06-22 20:23:28.947215 | orchestrator | registry.osism.tech/osism/cephclient reef d8e5299cdef6 17 hours ago 453MB 2025-06-22 20:23:28.947249 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 96840cd4db15 19 hours ago 628MB 2025-06-22 20:23:28.947260 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 25062cb03cf6 19 hours ago 746MB 2025-06-22 20:23:28.947271 | orchestrator | registry.osism.tech/kolla/cron 2024.2 c659d691ba62 19 hours ago 318MB 2025-06-22 20:23:28.947282 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f8a5c9140814 19 hours ago 410MB 2025-06-22 20:23:28.947293 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 8741e449cbc0 19 hours ago 891MB 2025-06-22 20:23:28.947304 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 24a53a5e4997 19 hours ago 360MB 2025-06-22 20:23:28.947315 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 2bd1aa159bc9 19 hours ago 456MB 2025-06-22 20:23:28.947326 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 b71d4a4744db 19 hours ago 358MB 2025-06-22 20:23:28.947337 | orchestrator | registry.osism.tech/osism/osism-ansible latest e5e695b56f48 20 hours ago 577MB 2025-06-22 20:23:28.947347 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 25505a1bd01a 20 hours ago 574MB 2025-06-22 20:23:28.947359 | orchestrator | registry.osism.tech/osism/ceph-ansible reef 1a7abcd86bfe 20 hours ago 537MB 2025-06-22 20:23:28.947369 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest bbe72b03a80c 20 hours ago 1.21GB 2025-06-22 20:23:28.947380 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 3a88ef1f1565 20 hours ago 310MB 2025-06-22 20:23:28.947414 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 3 weeks ago 41.4MB 2025-06-22 20:23:28.947425 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 3 weeks ago 224MB 2025-06-22 20:23:28.947436 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 4 months ago 328MB 2025-06-22 20:23:28.947477 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-22 20:23:28.947488 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 
2025-06-22 20:23:28.947499 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 12 months ago 146MB 2025-06-22 20:23:29.213012 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:23:29.213133 | orchestrator | ++ semver latest 5.0.0 2025-06-22 20:23:29.263376 | orchestrator | 2025-06-22 20:23:29.263493 | orchestrator | ## Containers @ testbed-node-0 2025-06-22 20:23:29.263509 | orchestrator | 2025-06-22 20:23:29.263521 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-22 20:23:29.263532 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-22 20:23:29.263543 | orchestrator | + echo 2025-06-22 20:23:29.263554 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-22 20:23:29.263566 | orchestrator | + echo 2025-06-22 20:23:29.263577 | orchestrator | + osism container testbed-node-0 ps 2025-06-22 20:23:31.465578 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:23:31.465680 | orchestrator | 68eded942ec4 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-22 20:23:31.465698 | orchestrator | ca08a6f03a9f registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-22 20:23:31.465711 | orchestrator | 280f9fe8e5d8 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:23:31.465722 | orchestrator | 332cb766cacc registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-22 20:23:31.465733 | orchestrator | 91beffd1563e registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-06-22 20:23:31.465744 | orchestrator | 9b7c38f6c9d2 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-06-22 20:23:31.465755 | orchestrator | 2dd642f15c1c registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:23:31.465785 | orchestrator | 332d762d70f2 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 20:23:31.465796 | orchestrator | c4898a17641b registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2025-06-22 20:23:31.465808 | orchestrator | b4bd4ef2e475 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-22 20:23:31.465819 | orchestrator | 97faf02bc93b registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-06-22 20:23:31.465830 | orchestrator | 4734ebf85aa8 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-06-22 20:23:31.465861 | orchestrator | da541810f877 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-06-22 20:23:31.465873 | orchestrator | 7e18420d61cb registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor 
2025-06-22 20:23:31.465884 | orchestrator | 994cd44a087a registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-22 20:23:31.465894 | orchestrator | 87b3e0aebf92 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-22 20:23:31.465906 | orchestrator | f3a341e090c2 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-06-22 20:23:31.465917 | orchestrator | 8b8c8bd17da6 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-22 20:23:31.465927 | orchestrator | f4ac7f31f7ad registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-22 20:23:31.465938 | orchestrator | 00a358cd039d registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-22 20:23:31.465949 | orchestrator | 8d8a0d10642b registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-22 20:23:31.465982 | orchestrator | 84a67a273ae9 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-22 20:23:31.465994 | orchestrator | 72180b3e0c4f registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-06-22 20:23:31.466005 | orchestrator | 8b75d705773d registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-06-22 20:23:31.466091 | orchestrator | 2dc798a096b8 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) barbican_keystone_listener 2025-06-22 20:23:31.466108 | orchestrator | 6279afcfb1c7 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-22 20:23:31.466125 | orchestrator | 007272e5523f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-06-22 20:23:31.466136 | orchestrator | 52db6fbdce0d registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-22 20:23:31.466152 | orchestrator | c62f12a2bb12 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-22 20:23:31.466163 | orchestrator | da9ff9993d5d registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-22 20:23:31.466174 | orchestrator | d769d7568899 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-22 20:23:31.466185 | orchestrator | 391e9557c060 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-06-22 20:23:31.466204 | orchestrator | b9733c633ae3 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-06-22 20:23:31.466216 | orchestrator | 31094b8c2894 
registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-06-22 20:23:31.466226 | orchestrator | bb519f911c89 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-22 20:23:31.466237 | orchestrator | 07c2f512baca registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-22 20:23:31.466248 | orchestrator | e12ec7d044d5 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-22 20:23:31.466259 | orchestrator | 1868de4b1138 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-22 20:23:31.466270 | orchestrator | c5996983c6d8 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-06-22 20:23:31.466281 | orchestrator | ce4f7d4f92e6 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-22 20:23:31.466292 | orchestrator | 14216744b07c registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-06-22 20:23:31.466303 | orchestrator | 2a07e963a853 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-22 20:23:31.466314 | orchestrator | d9bf42eafe53 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-06-22 20:23:31.466325 | orchestrator | 0bea53b8c272 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-22 20:23:31.466346 | orchestrator | c474916cfa3e registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:23:31.466358 | orchestrator | 23963c308b47 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-22 20:23:31.466369 | orchestrator | 373ae74d4097 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-22 20:23:31.466380 | orchestrator | 11fc1c392fac registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-22 20:23:31.466391 | orchestrator | 0f12695f8177 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-22 20:23:31.466402 | orchestrator | 7e4795b5f31d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-22 20:23:31.466413 | orchestrator | 52b8efe6b265 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-22 20:23:31.466429 | orchestrator | 1c6f9127cd2f registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-22 20:23:31.726519 | orchestrator | 2025-06-22 20:23:31.726647 | orchestrator | ## Images @ testbed-node-0 2025-06-22 20:23:31.726664 | orchestrator | 2025-06-22 20:23:31.726676 | orchestrator | + echo 2025-06-22 20:23:31.726688 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-22 20:23:31.726700 | orchestrator | + echo 2025-06-22 
20:23:31.726711 | orchestrator | + osism container testbed-node-0 images 2025-06-22 20:23:33.859511 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:23:33.859609 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2cf27af39265 17 hours ago 1.27GB 2025-06-22 20:23:33.859621 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 17e4e32cecb2 19 hours ago 329MB 2025-06-22 20:23:33.859629 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 25062cb03cf6 19 hours ago 746MB 2025-06-22 20:23:33.859636 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 96840cd4db15 19 hours ago 628MB 2025-06-22 20:23:33.859644 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d42572f0d670 19 hours ago 417MB 2025-06-22 20:23:33.859651 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 abfeaab337df 19 hours ago 375MB 2025-06-22 20:23:33.859658 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 fb013c345946 19 hours ago 1.01GB 2025-06-22 20:23:33.859666 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 39ba0dbdb98f 19 hours ago 326MB 2025-06-22 20:23:33.859673 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 1c551368b0e4 19 hours ago 1.55GB 2025-06-22 20:23:33.859680 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 a3c6da290bf0 19 hours ago 1.59GB 2025-06-22 20:23:33.859688 | orchestrator | registry.osism.tech/kolla/cron 2024.2 c659d691ba62 19 hours ago 318MB 2025-06-22 20:23:33.859695 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 670bc290a895 19 hours ago 318MB 2025-06-22 20:23:33.859702 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f8a5c9140814 19 hours ago 410MB 2025-06-22 20:23:33.859710 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 6d2e272aef33 19 hours ago 351MB 2025-06-22 20:23:33.859717 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d9afceed19bb 19 hours ago 353MB 2025-06-22 20:23:33.859724 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 bc354ed445de 19 hours ago 344MB 2025-06-22 20:23:33.859919 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 b71d4a4744db 19 hours ago 358MB 2025-06-22 20:23:33.859931 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 cfc5e31115f8 19 hours ago 1.21GB 2025-06-22 20:23:33.859952 | orchestrator | registry.osism.tech/kolla/redis 2024.2 96628e7d3dc2 19 hours ago 324MB 2025-06-22 20:23:33.859960 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9c58973ddd03 19 hours ago 324MB 2025-06-22 20:23:33.859967 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1e9b56ff51db 19 hours ago 590MB 2025-06-22 20:23:33.859994 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 5cfcb423ec41 19 hours ago 361MB 2025-06-22 20:23:33.860002 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 c1952fcbe2c1 19 hours ago 361MB 2025-06-22 20:23:33.860632 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 6964665c425e 19 hours ago 1.04GB 2025-06-22 20:23:33.860653 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 70aa09d112ce 19 hours ago 1.04GB 2025-06-22 20:23:33.860665 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 a4b104bb7186 19 hours ago 1.04GB 2025-06-22 20:23:33.860700 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 e8f59e5bcbcc 19 hours ago 1.04GB 2025-06-22 20:23:33.860712 | orchestrator | 
registry.osism.tech/kolla/cinder-scheduler 2024.2 a55cf8fe69f6 19 hours ago 1.41GB 2025-06-22 20:23:33.860739 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a8fbd9b056a9 19 hours ago 1.41GB 2025-06-22 20:23:33.860753 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 b2f731aee8c3 19 hours ago 1.24GB 2025-06-22 20:23:33.860765 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 7b8d97aece3e 19 hours ago 1.13GB 2025-06-22 20:23:33.860777 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 276e3f7550e4 19 hours ago 1.11GB 2025-06-22 20:23:33.860789 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 98b1944e9d2c 19 hours ago 1.11GB 2025-06-22 20:23:33.860800 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 8b834d79dcf0 19 hours ago 1.04GB 2025-06-22 20:23:33.860812 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 8a70cb5469ed 19 hours ago 1.04GB 2025-06-22 20:23:33.860839 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 f54031146bb9 19 hours ago 1.15GB 2025-06-22 20:23:33.860852 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 1beeeb46970e 19 hours ago 1.42GB 2025-06-22 20:23:33.862301 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 042dad162fd3 19 hours ago 1.29GB 2025-06-22 20:23:33.862337 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 47f8fa45e273 19 hours ago 1.29GB 2025-06-22 20:23:33.862349 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b0da04c453c9 19 hours ago 1.29GB 2025-06-22 20:23:33.862361 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 b96bfb233df5 19 hours ago 1.11GB 2025-06-22 20:23:33.862372 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 ecef90027b66 19 hours ago 1.11GB 2025-06-22 20:23:33.862383 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 84268b46ab17 19 hours ago 1.06GB 2025-06-22 20:23:33.862396 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 77e03fa39ed2 19 hours ago 1.06GB 2025-06-22 20:23:33.862409 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f69b0e4ffa3b 19 hours ago 1.06GB 2025-06-22 20:23:33.862420 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 9071c45dd565 19 hours ago 1.1GB 2025-06-22 20:23:33.862432 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 1639a28ea30b 19 hours ago 1.12GB 2025-06-22 20:23:33.862443 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 5dbfa836b524 19 hours ago 1.1GB 2025-06-22 20:23:33.862483 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 80c2b253cea2 19 hours ago 1.1GB 2025-06-22 20:23:33.862495 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a70622aef683 19 hours ago 1.12GB 2025-06-22 20:23:33.862506 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 416e4b8651cd 19 hours ago 1.06GB 2025-06-22 20:23:33.862517 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 92fc18d3a37f 19 hours ago 1.05GB 2025-06-22 20:23:33.862528 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 53820dca1330 19 hours ago 1.06GB 2025-06-22 20:23:33.862540 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 dc5ba2112c04 19 hours ago 1.05GB 2025-06-22 20:23:33.862552 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 425f7d98d0a6 19 hours ago 1.05GB 2025-06-22 20:23:33.862583 | orchestrator | 
registry.osism.tech/kolla/designate-mdns 2024.2 a83c95d8ebf1 19 hours ago 1.05GB 2025-06-22 20:23:33.862609 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 aaaba25167ef 19 hours ago 1.2GB 2025-06-22 20:23:33.862622 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 4fcbd97b5d21 19 hours ago 1.31GB 2025-06-22 20:23:33.862634 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 175ba5ac3915 19 hours ago 1.04GB 2025-06-22 20:23:33.862646 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 751bd76c7350 19 hours ago 947MB 2025-06-22 20:23:33.862658 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1eb9c225409b 19 hours ago 946MB 2025-06-22 20:23:33.862670 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 fd1856824c21 19 hours ago 946MB 2025-06-22 20:23:33.862683 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ff45ae632b5a 19 hours ago 947MB 2025-06-22 20:23:34.117303 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:23:34.117433 | orchestrator | ++ semver latest 5.0.0 2025-06-22 20:23:34.171405 | orchestrator | 2025-06-22 20:23:34.171550 | orchestrator | ## Containers @ testbed-node-1 2025-06-22 20:23:34.171567 | orchestrator | 2025-06-22 20:23:34.171579 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-22 20:23:34.171591 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-22 20:23:34.171602 | orchestrator | + echo 2025-06-22 20:23:34.171613 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-22 20:23:34.171625 | orchestrator | + echo 2025-06-22 20:23:34.171636 | orchestrator | + osism container testbed-node-1 ps 2025-06-22 20:23:36.372164 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:23:36.372237 | orchestrator | 4caa4a970f00 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-22 20:23:36.372245 | orchestrator | bfbca791987a registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-22 20:23:36.372250 | orchestrator | 9235a6716dbe registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes grafana 2025-06-22 20:23:36.372254 | orchestrator | bd72f64d2ff0 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:23:36.373175 | orchestrator | facdd77a0a64 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-22 20:23:36.373198 | orchestrator | be6ad95900c3 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) cinder_scheduler 2025-06-22 20:23:36.373203 | orchestrator | e42dc5ec1c17 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:23:36.373212 | orchestrator | 847c3cafc1b9 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 20:23:36.373216 | orchestrator | 9e20e46f8648 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-22 20:23:36.373221 | orchestrator | 31290626e1c6 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes 
ago Up 12 minutes prometheus_cadvisor 2025-06-22 20:23:36.373225 | orchestrator | 389a107b44b9 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-06-22 20:23:36.373241 | orchestrator | eae70e9d8b0d registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-06-22 20:23:36.373245 | orchestrator | 2ea9d30bafaa registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-06-22 20:23:36.373249 | orchestrator | bb7758ae0345 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor 2025-06-22 20:23:36.373252 | orchestrator | ea1f51dd167e registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-22 20:23:36.373256 | orchestrator | a11af6a5e7c6 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-22 20:23:36.373260 | orchestrator | bddf3a000c94 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-06-22 20:23:36.373263 | orchestrator | a815f220c6ca registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-22 20:23:36.373267 | orchestrator | 0d9c3f3c0de4 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-22 20:23:36.373271 | orchestrator | 5783e5105040 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-22 20:23:36.373277 | orchestrator | a90721a9b735 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-22 20:23:36.373281 | orchestrator | 83fe413f1c72 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-22 20:23:36.373285 | orchestrator | 1df49e58c190 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-06-22 20:23:36.373288 | orchestrator | 1984d1b9d8bf registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) barbican_worker 2025-06-22 20:23:36.373292 | orchestrator | b80bed207071 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-22 20:23:36.373296 | orchestrator | c32a6d578e1a registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-22 20:23:36.373300 | orchestrator | e7599a54fee2 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-06-22 20:23:36.373311 | orchestrator | 48a5806de0db registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-22 20:23:36.373317 | orchestrator | f101e804ebb6 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) 
horizon 2025-06-22 20:23:36.373321 | orchestrator | 7a48311a56ca registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-22 20:23:36.373327 | orchestrator | 20efce618aba registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-22 20:23:36.373331 | orchestrator | 86800cd151d6 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-22 20:23:36.373335 | orchestrator | 8a9468bec447 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-06-22 20:23:36.373339 | orchestrator | 8b0708c124a3 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-22 20:23:36.373343 | orchestrator | cf5e69fa2317 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-06-22 20:23:36.373346 | orchestrator | 7512578f6bba registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-22 20:23:36.373350 | orchestrator | 9e96775e8637 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-22 20:23:36.373354 | orchestrator | a8ea8c9761f3 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-22 20:23:36.373358 | orchestrator | 186f51f3e0dc registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-06-22 20:23:36.373361 | orchestrator | f316dfa625a6 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-06-22 20:23:36.373365 | orchestrator | 072c1940bf28 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-06-22 20:23:36.373369 | orchestrator | d6a9c3abea1a registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-22 20:23:36.373372 | orchestrator | 8674c3f9dc4e registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-22 20:23:36.373376 | orchestrator | 193df39412e1 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-06-22 20:23:36.373380 | orchestrator | 18356ed9d6da registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:23:36.373384 | orchestrator | 6b9be11481d5 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-22 20:23:36.373387 | orchestrator | ca1b6cd7fcb6 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-22 20:23:36.373391 | orchestrator | eeaedb705ba7 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-22 20:23:36.373395 | orchestrator | f52d6201432e registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-22 20:23:36.373399 | 
orchestrator | 33a31d265a79 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-22 20:23:36.373407 | orchestrator | d01236cf2672 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-22 20:23:36.373410 | orchestrator | 7479c35edda1 registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-22 20:23:36.629804 | orchestrator | 2025-06-22 20:23:36.629882 | orchestrator | ## Images @ testbed-node-1 2025-06-22 20:23:36.629891 | orchestrator | 2025-06-22 20:23:36.629898 | orchestrator | + echo 2025-06-22 20:23:36.629905 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-22 20:23:36.629913 | orchestrator | + echo 2025-06-22 20:23:36.629937 | orchestrator | + osism container testbed-node-1 images 2025-06-22 20:23:38.767875 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:23:38.767970 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2cf27af39265 17 hours ago 1.27GB 2025-06-22 20:23:38.767983 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 17e4e32cecb2 19 hours ago 329MB 2025-06-22 20:23:38.767993 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 25062cb03cf6 19 hours ago 746MB 2025-06-22 20:23:38.768003 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 96840cd4db15 19 hours ago 628MB 2025-06-22 20:23:38.768013 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d42572f0d670 19 hours ago 417MB 2025-06-22 20:23:38.768023 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 fb013c345946 19 hours ago 1.01GB 2025-06-22 20:23:38.768032 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 abfeaab337df 19 hours ago 375MB 2025-06-22 20:23:38.768042 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 39ba0dbdb98f 19 hours ago 326MB 2025-06-22 20:23:38.768052 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 1c551368b0e4 19 hours ago 1.55GB 2025-06-22 20:23:38.768061 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 a3c6da290bf0 19 hours ago 1.59GB 2025-06-22 20:23:38.768071 | orchestrator | registry.osism.tech/kolla/cron 2024.2 c659d691ba62 19 hours ago 318MB 2025-06-22 20:23:38.768080 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 670bc290a895 19 hours ago 318MB 2025-06-22 20:23:38.768090 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f8a5c9140814 19 hours ago 410MB 2025-06-22 20:23:38.768100 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 6d2e272aef33 19 hours ago 351MB 2025-06-22 20:23:38.768110 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d9afceed19bb 19 hours ago 353MB 2025-06-22 20:23:38.768119 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 bc354ed445de 19 hours ago 344MB 2025-06-22 20:23:38.768129 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 b71d4a4744db 19 hours ago 358MB 2025-06-22 20:23:38.768138 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 cfc5e31115f8 19 hours ago 1.21GB 2025-06-22 20:23:38.768148 | orchestrator | registry.osism.tech/kolla/redis 2024.2 96628e7d3dc2 19 hours ago 324MB 2025-06-22 20:23:38.768158 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9c58973ddd03 19 hours ago 324MB 2025-06-22 20:23:38.768168 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1e9b56ff51db 19 hours ago 
590MB 2025-06-22 20:23:38.768177 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 5cfcb423ec41 19 hours ago 361MB 2025-06-22 20:23:38.768187 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 c1952fcbe2c1 19 hours ago 361MB 2025-06-22 20:23:38.768197 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 a55cf8fe69f6 19 hours ago 1.41GB 2025-06-22 20:23:38.768227 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a8fbd9b056a9 19 hours ago 1.41GB 2025-06-22 20:23:38.768238 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 b2f731aee8c3 19 hours ago 1.24GB 2025-06-22 20:23:38.768248 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 7b8d97aece3e 19 hours ago 1.13GB 2025-06-22 20:23:38.768257 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 276e3f7550e4 19 hours ago 1.11GB 2025-06-22 20:23:38.768267 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 98b1944e9d2c 19 hours ago 1.11GB 2025-06-22 20:23:38.768276 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 f54031146bb9 19 hours ago 1.15GB 2025-06-22 20:23:38.768286 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 1beeeb46970e 19 hours ago 1.42GB 2025-06-22 20:23:38.768295 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 042dad162fd3 19 hours ago 1.29GB 2025-06-22 20:23:38.768305 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 47f8fa45e273 19 hours ago 1.29GB 2025-06-22 20:23:38.768314 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b0da04c453c9 19 hours ago 1.29GB 2025-06-22 20:23:38.768324 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 84268b46ab17 19 hours ago 1.06GB 2025-06-22 20:23:38.768333 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 77e03fa39ed2 19 hours ago 1.06GB 2025-06-22 20:23:38.768358 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f69b0e4ffa3b 19 hours ago 1.06GB 2025-06-22 20:23:38.768368 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 416e4b8651cd 19 hours ago 1.06GB 2025-06-22 20:23:38.768378 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 92fc18d3a37f 19 hours ago 1.05GB 2025-06-22 20:23:38.768387 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 53820dca1330 19 hours ago 1.06GB 2025-06-22 20:23:38.768397 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 dc5ba2112c04 19 hours ago 1.05GB 2025-06-22 20:23:38.768407 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 425f7d98d0a6 19 hours ago 1.05GB 2025-06-22 20:23:38.768436 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 a83c95d8ebf1 19 hours ago 1.05GB 2025-06-22 20:23:38.768477 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 aaaba25167ef 19 hours ago 1.2GB 2025-06-22 20:23:38.768490 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 4fcbd97b5d21 19 hours ago 1.31GB 2025-06-22 20:23:38.768501 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 175ba5ac3915 19 hours ago 1.04GB 2025-06-22 20:23:38.768512 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 751bd76c7350 19 hours ago 947MB 2025-06-22 20:23:38.768523 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1eb9c225409b 19 hours ago 946MB 2025-06-22 20:23:38.768533 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ff45ae632b5a 19 hours ago 947MB 2025-06-22 20:23:38.768545 | 
orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 fd1856824c21 19 hours ago 946MB 2025-06-22 20:23:39.015887 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:23:39.016074 | orchestrator | ++ semver latest 5.0.0 2025-06-22 20:23:39.051985 | orchestrator | 2025-06-22 20:23:39.052082 | orchestrator | ## Containers @ testbed-node-2 2025-06-22 20:23:39.052097 | orchestrator | 2025-06-22 20:23:39.052109 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-22 20:23:39.052121 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-22 20:23:39.052132 | orchestrator | + echo 2025-06-22 20:23:39.052168 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-22 20:23:39.052182 | orchestrator | + echo 2025-06-22 20:23:39.052193 | orchestrator | + osism container testbed-node-2 ps 2025-06-22 20:23:41.235276 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:23:41.235378 | orchestrator | 6aa802f36ee5 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-22 20:23:41.235394 | orchestrator | f447d690d136 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-22 20:23:41.235406 | orchestrator | 282e1a675776 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-22 20:23:41.235425 | orchestrator | 416678ba565e registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:23:41.235443 | orchestrator | cdac08aaf923 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-22 20:23:41.235508 | orchestrator | 6bf94f7ece1d registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-22 20:23:41.235525 | orchestrator | 61d5af49eca5 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:23:41.235541 | orchestrator | 1a68a71a5117 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 20:23:41.235557 | orchestrator | cb5138f46e01 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_elasticsearch_exporter 2025-06-22 20:23:41.235573 | orchestrator | b7fdd25f19fa registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-22 20:23:41.235590 | orchestrator | b75ea6db537c registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-06-22 20:23:41.235608 | orchestrator | c2b684d328b8 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-06-22 20:23:41.235625 | orchestrator | 91c8a085eff9 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-06-22 20:23:41.235641 | orchestrator | cfdb6ac6036c registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) magnum_conductor 2025-06-22 20:23:41.235657 | 
orchestrator | ee5061aa44b1 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-22 20:23:41.235675 | orchestrator | 4f7dfd47ad60 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-22 20:23:41.235693 | orchestrator | b3f687d7a495 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-06-22 20:23:41.235712 | orchestrator | 0600b63792a4 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-22 20:23:41.235756 | orchestrator | c177357329c5 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-22 20:23:41.235768 | orchestrator | 2ae623dc7f33 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-22 20:23:41.235779 | orchestrator | 9ae8e353996f registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-22 20:23:41.235810 | orchestrator | 8c5838e1e614 registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-22 20:23:41.235841 | orchestrator | 861945fa9674 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_backend_bind9 2025-06-22 20:23:41.235854 | orchestrator | a7e25cce9da5 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) barbican_worker 2025-06-22 20:23:41.235866 | orchestrator | c4013e6a3370 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-22 20:23:41.235878 | orchestrator | 14b1f935e92f registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-06-22 20:23:41.235891 | orchestrator | 74f835eeb2e9 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-22 20:23:41.235903 | orchestrator | 781df7675a8e registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-22 20:23:41.235915 | orchestrator | 0accbb5969b3 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-22 20:23:41.235932 | orchestrator | 51bca529e7e5 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-22 20:23:41.235950 | orchestrator | 8aa64bd6839f registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-22 20:23:41.235968 | orchestrator | 0c9978e939f4 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-22 20:23:41.235985 | orchestrator | c7902942d218 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-22 20:23:41.236003 | orchestrator | c0e0250e4154 registry.osism.tech/kolla/opensearch:2024.2 
"dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-22 20:23:41.236022 | orchestrator | be7705a851a8 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-06-22 20:23:41.236040 | orchestrator | ec00b325a8fb registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-22 20:23:41.236059 | orchestrator | e6878f4d614d registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-22 20:23:41.236077 | orchestrator | 6d0cadd3abb8 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-22 20:23:41.236108 | orchestrator | a1cec507cba1 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-06-22 20:23:41.236127 | orchestrator | 1769b71ab7ed registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-06-22 20:23:41.236146 | orchestrator | 210845952ec7 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-06-22 20:23:41.236164 | orchestrator | b6a2cf201e02 registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-22 20:23:41.236183 | orchestrator | bde1cbbfe221 registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-06-22 20:23:41.236202 | orchestrator | 12f76fe8576d registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-06-22 20:23:41.236233 | orchestrator | 176bccda3efd registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:23:41.236254 | orchestrator | 355f65f633da registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-22 20:23:41.236273 | orchestrator | e556e3c132d0 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-22 20:23:41.236293 | orchestrator | a90470231cb1 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-22 20:23:41.236311 | orchestrator | 1106001c36fb registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-22 20:23:41.236330 | orchestrator | 4769075a6cf3 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-22 20:23:41.236348 | orchestrator | e8c20b64c577 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-22 20:23:41.236366 | orchestrator | 000e3eec60ef registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-22 20:23:41.491915 | orchestrator | 2025-06-22 20:23:41.492029 | orchestrator | ## Images @ testbed-node-2 2025-06-22 20:23:41.492046 | orchestrator | 2025-06-22 20:23:41.492059 | orchestrator | + echo 2025-06-22 20:23:41.492071 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-22 20:23:41.492083 | orchestrator | + echo 2025-06-22 20:23:41.492095 | orchestrator | + osism 
container testbed-node-2 images 2025-06-22 20:23:43.599801 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:23:43.599911 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 2cf27af39265 17 hours ago 1.27GB 2025-06-22 20:23:43.599926 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 17e4e32cecb2 19 hours ago 329MB 2025-06-22 20:23:43.599938 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 25062cb03cf6 19 hours ago 746MB 2025-06-22 20:23:43.599949 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 96840cd4db15 19 hours ago 628MB 2025-06-22 20:23:43.599960 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d42572f0d670 19 hours ago 417MB 2025-06-22 20:23:43.600065 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 abfeaab337df 19 hours ago 375MB 2025-06-22 20:23:43.600080 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 fb013c345946 19 hours ago 1.01GB 2025-06-22 20:23:43.600091 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 39ba0dbdb98f 19 hours ago 326MB 2025-06-22 20:23:43.600102 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 1c551368b0e4 19 hours ago 1.55GB 2025-06-22 20:23:43.600113 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 a3c6da290bf0 19 hours ago 1.59GB 2025-06-22 20:23:43.600124 | orchestrator | registry.osism.tech/kolla/cron 2024.2 c659d691ba62 19 hours ago 318MB 2025-06-22 20:23:43.600136 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 670bc290a895 19 hours ago 318MB 2025-06-22 20:23:43.600147 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 f8a5c9140814 19 hours ago 410MB 2025-06-22 20:23:43.600163 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 6d2e272aef33 19 hours ago 351MB 2025-06-22 20:23:43.600174 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 d9afceed19bb 19 hours ago 353MB 2025-06-22 20:23:43.600185 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 bc354ed445de 19 hours ago 344MB 2025-06-22 20:23:43.600196 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 b71d4a4744db 19 hours ago 358MB 2025-06-22 20:23:43.600207 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 cfc5e31115f8 19 hours ago 1.21GB 2025-06-22 20:23:43.600218 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 9c58973ddd03 19 hours ago 324MB 2025-06-22 20:23:43.600229 | orchestrator | registry.osism.tech/kolla/redis 2024.2 96628e7d3dc2 19 hours ago 324MB 2025-06-22 20:23:43.600240 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 1e9b56ff51db 19 hours ago 590MB 2025-06-22 20:23:43.600251 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 5cfcb423ec41 19 hours ago 361MB 2025-06-22 20:23:43.600261 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 c1952fcbe2c1 19 hours ago 361MB 2025-06-22 20:23:43.600272 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 a55cf8fe69f6 19 hours ago 1.41GB 2025-06-22 20:23:43.600283 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 a8fbd9b056a9 19 hours ago 1.41GB 2025-06-22 20:23:43.600294 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 b2f731aee8c3 19 hours ago 1.24GB 2025-06-22 20:23:43.600305 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 7b8d97aece3e 19 hours ago 1.13GB 2025-06-22 20:23:43.600316 | orchestrator | registry.osism.tech/kolla/keystone-fernet 
2024.2 276e3f7550e4 19 hours ago 1.11GB 2025-06-22 20:23:43.600327 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 98b1944e9d2c 19 hours ago 1.11GB 2025-06-22 20:23:43.600339 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 f54031146bb9 19 hours ago 1.15GB 2025-06-22 20:23:43.600353 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 1beeeb46970e 19 hours ago 1.42GB 2025-06-22 20:23:43.600365 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 042dad162fd3 19 hours ago 1.29GB 2025-06-22 20:23:43.600377 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 47f8fa45e273 19 hours ago 1.29GB 2025-06-22 20:23:43.600390 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 b0da04c453c9 19 hours ago 1.29GB 2025-06-22 20:23:43.600402 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 84268b46ab17 19 hours ago 1.06GB 2025-06-22 20:23:43.600422 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 77e03fa39ed2 19 hours ago 1.06GB 2025-06-22 20:23:43.600477 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 f69b0e4ffa3b 19 hours ago 1.06GB 2025-06-22 20:23:43.600491 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 416e4b8651cd 19 hours ago 1.06GB 2025-06-22 20:23:43.600505 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 92fc18d3a37f 19 hours ago 1.05GB 2025-06-22 20:23:43.600517 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 53820dca1330 19 hours ago 1.06GB 2025-06-22 20:23:43.600530 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 dc5ba2112c04 19 hours ago 1.05GB 2025-06-22 20:23:43.600542 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 425f7d98d0a6 19 hours ago 1.05GB 2025-06-22 20:23:43.600554 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 a83c95d8ebf1 19 hours ago 1.05GB 2025-06-22 20:23:43.600567 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 aaaba25167ef 19 hours ago 1.2GB 2025-06-22 20:23:43.600579 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 4fcbd97b5d21 19 hours ago 1.31GB 2025-06-22 20:23:43.600592 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 175ba5ac3915 19 hours ago 1.04GB 2025-06-22 20:23:43.600604 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 751bd76c7350 19 hours ago 947MB 2025-06-22 20:23:43.600616 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 1eb9c225409b 19 hours ago 946MB 2025-06-22 20:23:43.600629 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 fd1856824c21 19 hours ago 946MB 2025-06-22 20:23:43.600641 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 ff45ae632b5a 19 hours ago 947MB 2025-06-22 20:23:43.865784 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-22 20:23:43.871342 | orchestrator | + set -e 2025-06-22 20:23:43.871421 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 20:23:43.872380 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 20:23:43.872405 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 20:23:43.872416 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 20:23:43.872427 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 20:23:43.872438 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 20:23:43.872449 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 20:23:43.872493 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-22 
20:23:43.872503 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-22 20:23:43.872514 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 20:23:43.872525 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 20:23:43.872536 | orchestrator | ++ export ARA=false 2025-06-22 20:23:43.872547 | orchestrator | ++ ARA=false 2025-06-22 20:23:43.872558 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 20:23:43.872569 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 20:23:43.872579 | orchestrator | ++ export TEMPEST=false 2025-06-22 20:23:43.872590 | orchestrator | ++ TEMPEST=false 2025-06-22 20:23:43.872600 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 20:23:43.872611 | orchestrator | ++ IS_ZUUL=true 2025-06-22 20:23:43.872621 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 20:23:43.872632 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 20:23:43.872657 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 20:23:43.872667 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 20:23:43.872678 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 20:23:43.872688 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 20:23:43.872699 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 20:23:43.872710 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 20:23:43.872720 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 20:23:43.872731 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 20:23:43.872741 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 20:23:43.872753 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-22 20:23:43.883380 | orchestrator | + set -e 2025-06-22 20:23:43.883513 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:23:43.883527 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 20:23:43.883539 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:23:43.883550 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:23:43.883561 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:23:43.883572 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:23:43.884680 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 20:23:43.887541 | orchestrator | 2025-06-22 20:23:43.887566 | orchestrator | # Ceph status 2025-06-22 20:23:43.887583 | orchestrator | 2025-06-22 20:23:43.887595 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-22 20:23:43.887607 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-22 20:23:43.887618 | orchestrator | + echo 2025-06-22 20:23:43.887629 | orchestrator | + echo '# Ceph status' 2025-06-22 20:23:43.887640 | orchestrator | + echo 2025-06-22 20:23:43.887651 | orchestrator | + ceph -s 2025-06-22 20:23:44.481168 | orchestrator | cluster: 2025-06-22 20:23:44.482257 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-22 20:23:44.482298 | orchestrator | health: HEALTH_OK 2025-06-22 20:23:44.482312 | orchestrator | 2025-06-22 20:23:44.482324 | orchestrator | services: 2025-06-22 20:23:44.482336 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-06-22 20:23:44.482349 | orchestrator | mgr: testbed-node-2(active, since 15m), standbys: testbed-node-1, testbed-node-0 2025-06-22 20:23:44.482376 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-22 20:23:44.482397 | orchestrator | osd: 6 osds: 6 up 
(since 24m), 6 in (since 24m) 2025-06-22 20:23:44.482409 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-22 20:23:44.482420 | orchestrator | 2025-06-22 20:23:44.482431 | orchestrator | data: 2025-06-22 20:23:44.482441 | orchestrator | volumes: 1/1 healthy 2025-06-22 20:23:44.482481 | orchestrator | pools: 14 pools, 401 pgs 2025-06-22 20:23:44.482494 | orchestrator | objects: 523 objects, 2.2 GiB 2025-06-22 20:23:44.482528 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-22 20:23:44.482540 | orchestrator | pgs: 401 active+clean 2025-06-22 20:23:44.482550 | orchestrator | 2025-06-22 20:23:44.520021 | orchestrator | 2025-06-22 20:23:44.520102 | orchestrator | # Ceph versions 2025-06-22 20:23:44.520115 | orchestrator | 2025-06-22 20:23:44.520127 | orchestrator | + echo 2025-06-22 20:23:44.520138 | orchestrator | + echo '# Ceph versions' 2025-06-22 20:23:44.520149 | orchestrator | + echo 2025-06-22 20:23:44.520160 | orchestrator | + ceph versions 2025-06-22 20:23:45.081419 | orchestrator | { 2025-06-22 20:23:45.081571 | orchestrator | "mon": { 2025-06-22 20:23:45.081589 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:23:45.081602 | orchestrator | }, 2025-06-22 20:23:45.081613 | orchestrator | "mgr": { 2025-06-22 20:23:45.081624 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:23:45.081635 | orchestrator | }, 2025-06-22 20:23:45.081646 | orchestrator | "osd": { 2025-06-22 20:23:45.081658 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-22 20:23:45.081669 | orchestrator | }, 2025-06-22 20:23:45.081680 | orchestrator | "mds": { 2025-06-22 20:23:45.081691 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:23:45.081702 | orchestrator | }, 2025-06-22 20:23:45.081713 | orchestrator | "rgw": { 2025-06-22 20:23:45.081724 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:23:45.081735 | orchestrator | }, 2025-06-22 20:23:45.081746 | orchestrator | "overall": { 2025-06-22 20:23:45.081757 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-22 20:23:45.081769 | orchestrator | } 2025-06-22 20:23:45.081780 | orchestrator | } 2025-06-22 20:23:45.125282 | orchestrator | 2025-06-22 20:23:45.125361 | orchestrator | # Ceph OSD tree 2025-06-22 20:23:45.125368 | orchestrator | 2025-06-22 20:23:45.125373 | orchestrator | + echo 2025-06-22 20:23:45.125378 | orchestrator | + echo '# Ceph OSD tree' 2025-06-22 20:23:45.125383 | orchestrator | + echo 2025-06-22 20:23:45.125387 | orchestrator | + ceph osd df tree 2025-06-22 20:23:45.629661 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-22 20:23:45.629743 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-22 20:23:45.629772 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-22 20:23:45.629776 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.5 GiB 1 KiB 70 MiB 18 GiB 7.74 1.31 200 up osd.0 2025-06-22 20:23:45.629781 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 836 MiB 763 MiB 1 KiB 74 MiB 19 GiB 4.09 0.69 190 up osd.4 2025-06-22 20:23:45.629785 | orchestrator | -7 0.03897 - 40 GiB 2.4 
GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-22 20:23:45.629789 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.4 GiB 1 KiB 74 MiB 19 GiB 7.18 1.21 197 up osd.1 2025-06-22 20:23:45.629792 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 953 MiB 883 MiB 1 KiB 70 MiB 19 GiB 4.66 0.79 191 up osd.5 2025-06-22 20:23:45.629796 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-22 20:23:45.629800 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.43 1.09 192 up osd.2 2025-06-22 20:23:45.629809 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.40 0.91 200 up osd.3 2025-06-22 20:23:45.629813 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-22 20:23:45.629817 | orchestrator | MIN/MAX VAR: 0.69/1.31 STDDEV: 1.32 2025-06-22 20:23:45.673023 | orchestrator | 2025-06-22 20:23:45.673060 | orchestrator | # Ceph monitor status 2025-06-22 20:23:45.673065 | orchestrator | 2025-06-22 20:23:45.673070 | orchestrator | + echo 2025-06-22 20:23:45.673075 | orchestrator | + echo '# Ceph monitor status' 2025-06-22 20:23:45.673079 | orchestrator | + echo 2025-06-22 20:23:45.673083 | orchestrator | + ceph mon stat 2025-06-22 20:23:46.266509 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-22 20:23:46.314158 | orchestrator | 2025-06-22 20:23:46.314202 | orchestrator | # Ceph quorum status 2025-06-22 20:23:46.314208 | orchestrator | 2025-06-22 20:23:46.314212 | orchestrator | + echo 2025-06-22 20:23:46.314216 | orchestrator | + echo '# Ceph quorum status' 2025-06-22 20:23:46.314220 | orchestrator | + echo 2025-06-22 20:23:46.314425 | orchestrator | + ceph quorum_status 2025-06-22 20:23:46.315283 | orchestrator | + jq 2025-06-22 20:23:46.943548 | orchestrator | { 2025-06-22 20:23:46.943646 | orchestrator | "election_epoch": 8, 2025-06-22 20:23:46.943652 | orchestrator | "quorum": [ 2025-06-22 20:23:46.943657 | orchestrator | 0, 2025-06-22 20:23:46.943661 | orchestrator | 1, 2025-06-22 20:23:46.943665 | orchestrator | 2 2025-06-22 20:23:46.943669 | orchestrator | ], 2025-06-22 20:23:46.943673 | orchestrator | "quorum_names": [ 2025-06-22 20:23:46.943677 | orchestrator | "testbed-node-0", 2025-06-22 20:23:46.943681 | orchestrator | "testbed-node-1", 2025-06-22 20:23:46.943685 | orchestrator | "testbed-node-2" 2025-06-22 20:23:46.943689 | orchestrator | ], 2025-06-22 20:23:46.943693 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-22 20:23:46.943698 | orchestrator | "quorum_age": 1699, 2025-06-22 20:23:46.943702 | orchestrator | "features": { 2025-06-22 20:23:46.943706 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-22 20:23:46.943709 | orchestrator | "quorum_mon": [ 2025-06-22 20:23:46.943713 | orchestrator | "kraken", 2025-06-22 20:23:46.943717 | orchestrator | "luminous", 2025-06-22 20:23:46.943721 | orchestrator | "mimic", 2025-06-22 20:23:46.943725 | orchestrator | "osdmap-prune", 2025-06-22 20:23:46.943729 | orchestrator | "nautilus", 2025-06-22 20:23:46.943732 | orchestrator | "octopus", 2025-06-22 20:23:46.943736 | orchestrator | "pacific", 2025-06-22 20:23:46.943740 | 
orchestrator | "elector-pinging", 2025-06-22 20:23:46.943744 | orchestrator | "quincy", 2025-06-22 20:23:46.943747 | orchestrator | "reef" 2025-06-22 20:23:46.943751 | orchestrator | ] 2025-06-22 20:23:46.943755 | orchestrator | }, 2025-06-22 20:23:46.943759 | orchestrator | "monmap": { 2025-06-22 20:23:46.943783 | orchestrator | "epoch": 1, 2025-06-22 20:23:46.943787 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-22 20:23:46.943791 | orchestrator | "modified": "2025-06-22T19:55:09.496217Z", 2025-06-22 20:23:46.943795 | orchestrator | "created": "2025-06-22T19:55:09.496217Z", 2025-06-22 20:23:46.943799 | orchestrator | "min_mon_release": 18, 2025-06-22 20:23:46.943803 | orchestrator | "min_mon_release_name": "reef", 2025-06-22 20:23:46.943807 | orchestrator | "election_strategy": 1, 2025-06-22 20:23:46.943810 | orchestrator | "disallowed_leaders: ": "", 2025-06-22 20:23:46.943814 | orchestrator | "stretch_mode": false, 2025-06-22 20:23:46.943818 | orchestrator | "tiebreaker_mon": "", 2025-06-22 20:23:46.943822 | orchestrator | "removed_ranks: ": "", 2025-06-22 20:23:46.943825 | orchestrator | "features": { 2025-06-22 20:23:46.943829 | orchestrator | "persistent": [ 2025-06-22 20:23:46.943833 | orchestrator | "kraken", 2025-06-22 20:23:46.943837 | orchestrator | "luminous", 2025-06-22 20:23:46.943840 | orchestrator | "mimic", 2025-06-22 20:23:46.943844 | orchestrator | "osdmap-prune", 2025-06-22 20:23:46.943848 | orchestrator | "nautilus", 2025-06-22 20:23:46.943851 | orchestrator | "octopus", 2025-06-22 20:23:46.943855 | orchestrator | "pacific", 2025-06-22 20:23:46.943859 | orchestrator | "elector-pinging", 2025-06-22 20:23:46.943863 | orchestrator | "quincy", 2025-06-22 20:23:46.943867 | orchestrator | "reef" 2025-06-22 20:23:46.943870 | orchestrator | ], 2025-06-22 20:23:46.943874 | orchestrator | "optional": [] 2025-06-22 20:23:46.943878 | orchestrator | }, 2025-06-22 20:23:46.943882 | orchestrator | "mons": [ 2025-06-22 20:23:46.943885 | orchestrator | { 2025-06-22 20:23:46.943889 | orchestrator | "rank": 0, 2025-06-22 20:23:46.943893 | orchestrator | "name": "testbed-node-0", 2025-06-22 20:23:46.943897 | orchestrator | "public_addrs": { 2025-06-22 20:23:46.943900 | orchestrator | "addrvec": [ 2025-06-22 20:23:46.943904 | orchestrator | { 2025-06-22 20:23:46.943908 | orchestrator | "type": "v2", 2025-06-22 20:23:46.943911 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-22 20:23:46.943915 | orchestrator | "nonce": 0 2025-06-22 20:23:46.943919 | orchestrator | }, 2025-06-22 20:23:46.943923 | orchestrator | { 2025-06-22 20:23:46.943926 | orchestrator | "type": "v1", 2025-06-22 20:23:46.943930 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-22 20:23:46.943934 | orchestrator | "nonce": 0 2025-06-22 20:23:46.943938 | orchestrator | } 2025-06-22 20:23:46.943941 | orchestrator | ] 2025-06-22 20:23:46.943945 | orchestrator | }, 2025-06-22 20:23:46.943949 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-22 20:23:46.943953 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-22 20:23:46.943956 | orchestrator | "priority": 0, 2025-06-22 20:23:46.943960 | orchestrator | "weight": 0, 2025-06-22 20:23:46.943964 | orchestrator | "crush_location": "{}" 2025-06-22 20:23:46.943967 | orchestrator | }, 2025-06-22 20:23:46.943971 | orchestrator | { 2025-06-22 20:23:46.943975 | orchestrator | "rank": 1, 2025-06-22 20:23:46.943979 | orchestrator | "name": "testbed-node-1", 2025-06-22 20:23:46.943983 | orchestrator | "public_addrs": { 2025-06-22 
20:23:46.943987 | orchestrator | "addrvec": [ 2025-06-22 20:23:46.943991 | orchestrator | { 2025-06-22 20:23:46.943994 | orchestrator | "type": "v2", 2025-06-22 20:23:46.943998 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-22 20:23:46.944002 | orchestrator | "nonce": 0 2025-06-22 20:23:46.944005 | orchestrator | }, 2025-06-22 20:23:46.944009 | orchestrator | { 2025-06-22 20:23:46.944013 | orchestrator | "type": "v1", 2025-06-22 20:23:46.944017 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-22 20:23:46.944020 | orchestrator | "nonce": 0 2025-06-22 20:23:46.944024 | orchestrator | } 2025-06-22 20:23:46.944028 | orchestrator | ] 2025-06-22 20:23:46.944031 | orchestrator | }, 2025-06-22 20:23:46.944035 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-22 20:23:46.944039 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-22 20:23:46.944043 | orchestrator | "priority": 0, 2025-06-22 20:23:46.944046 | orchestrator | "weight": 0, 2025-06-22 20:23:46.944050 | orchestrator | "crush_location": "{}" 2025-06-22 20:23:46.944054 | orchestrator | }, 2025-06-22 20:23:46.944058 | orchestrator | { 2025-06-22 20:23:46.944061 | orchestrator | "rank": 2, 2025-06-22 20:23:46.944065 | orchestrator | "name": "testbed-node-2", 2025-06-22 20:23:46.944069 | orchestrator | "public_addrs": { 2025-06-22 20:23:46.944076 | orchestrator | "addrvec": [ 2025-06-22 20:23:46.944079 | orchestrator | { 2025-06-22 20:23:46.944083 | orchestrator | "type": "v2", 2025-06-22 20:23:46.944087 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-22 20:23:46.944091 | orchestrator | "nonce": 0 2025-06-22 20:23:46.944094 | orchestrator | }, 2025-06-22 20:23:46.944098 | orchestrator | { 2025-06-22 20:23:46.944102 | orchestrator | "type": "v1", 2025-06-22 20:23:46.944105 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-22 20:23:46.944109 | orchestrator | "nonce": 0 2025-06-22 20:23:46.944113 | orchestrator | } 2025-06-22 20:23:46.944117 | orchestrator | ] 2025-06-22 20:23:46.944120 | orchestrator | }, 2025-06-22 20:23:46.944124 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-22 20:23:46.944128 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-22 20:23:46.944131 | orchestrator | "priority": 0, 2025-06-22 20:23:46.944135 | orchestrator | "weight": 0, 2025-06-22 20:23:46.944139 | orchestrator | "crush_location": "{}" 2025-06-22 20:23:46.944143 | orchestrator | } 2025-06-22 20:23:46.944146 | orchestrator | ] 2025-06-22 20:23:46.944150 | orchestrator | } 2025-06-22 20:23:46.944154 | orchestrator | } 2025-06-22 20:23:46.944158 | orchestrator | 2025-06-22 20:23:46.944162 | orchestrator | # Ceph free space status 2025-06-22 20:23:46.944165 | orchestrator | 2025-06-22 20:23:46.944177 | orchestrator | + echo 2025-06-22 20:23:46.944181 | orchestrator | + echo '# Ceph free space status' 2025-06-22 20:23:46.944185 | orchestrator | + echo 2025-06-22 20:23:46.944188 | orchestrator | + ceph df 2025-06-22 20:23:47.504314 | orchestrator | --- RAW STORAGE --- 2025-06-22 20:23:47.504410 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-22 20:23:47.504423 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-22 20:23:47.504433 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-22 20:23:47.504443 | orchestrator | 2025-06-22 20:23:47.504490 | orchestrator | --- POOLS --- 2025-06-22 20:23:47.504503 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-22 20:23:47.504514 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-06-22 
20:23:47.504524 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:23:47.504534 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-22 20:23:47.504543 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:23:47.504553 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:23:47.504562 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-22 20:23:47.504572 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-22 20:23:47.504581 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:23:47.504591 | orchestrator | .rgw.root 9 32 3.0 KiB 7 56 KiB 0 52 GiB 2025-06-22 20:23:47.504600 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:23:47.504610 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:23:47.504619 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.98 35 GiB 2025-06-22 20:23:47.504628 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:23:47.504638 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:23:47.549496 | orchestrator | ++ semver latest 5.0.0 2025-06-22 20:23:47.608213 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-22 20:23:47.608309 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-22 20:23:47.608325 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-22 20:23:47.608336 | orchestrator | + osism apply facts 2025-06-22 20:23:49.343782 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:23:49.343883 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:23:49.343919 | orchestrator | Registering Redlock._release_script 2025-06-22 20:23:49.400154 | orchestrator | 2025-06-22 20:23:49 | INFO  | Task 0d89448f-b242-4222-a300-897dccb7f7ea (facts) was prepared for execution. 2025-06-22 20:23:49.400248 | orchestrator | 2025-06-22 20:23:49 | INFO  | It takes a moment until task 0d89448f-b242-4222-a300-897dccb7f7ea (facts) has been started and output is visible here. 
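The quorum status printed above can be reduced to a single pass/fail check. A minimal sketch, assuming only the ceph CLI and jq already used in the trace above; it is an illustration, not one of the job's own steps:

# Compare the number of monitors in the monmap with the number currently in quorum.
expected=$(ceph quorum_status -f json | jq '.monmap.mons | length')
actual=$(ceph quorum_status -f json | jq '.quorum | length')
if [ "$actual" -eq "$expected" ]; then
    echo "all ${expected} monitors are in quorum"
else
    echo "only ${actual}/${expected} monitors are in quorum" >&2
    exit 1
fi

Against the monmap shown above (three mons, quorum 0,1,2) this would print "all 3 monitors are in quorum".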
2025-06-22 20:24:01.673344 | orchestrator | 2025-06-22 20:24:01.673453 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 20:24:01.673499 | orchestrator | 2025-06-22 20:24:01.673513 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 20:24:01.673525 | orchestrator | Sunday 22 June 2025 20:23:53 +0000 (0:00:00.202) 0:00:00.202 *********** 2025-06-22 20:24:01.673537 | orchestrator | ok: [testbed-manager] 2025-06-22 20:24:01.673549 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:01.673560 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:01.673571 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:01.673582 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:01.673593 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:01.673603 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:01.673614 | orchestrator | 2025-06-22 20:24:01.673625 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 20:24:01.673637 | orchestrator | Sunday 22 June 2025 20:23:54 +0000 (0:00:01.317) 0:00:01.519 *********** 2025-06-22 20:24:01.673648 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:24:01.673660 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:01.673670 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:01.673681 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:01.673692 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:01.673703 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:01.673714 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:01.673725 | orchestrator | 2025-06-22 20:24:01.673736 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 20:24:01.673747 | orchestrator | 2025-06-22 20:24:01.673758 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 20:24:01.673769 | orchestrator | Sunday 22 June 2025 20:23:55 +0000 (0:00:01.099) 0:00:02.618 *********** 2025-06-22 20:24:01.673780 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:01.673791 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:01.673802 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:01.673812 | orchestrator | ok: [testbed-manager] 2025-06-22 20:24:01.673823 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:01.673834 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:01.673845 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:01.673856 | orchestrator | 2025-06-22 20:24:01.673867 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 20:24:01.673878 | orchestrator | 2025-06-22 20:24:01.673892 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 20:24:01.673904 | orchestrator | Sunday 22 June 2025 20:24:00 +0000 (0:00:05.067) 0:00:07.686 *********** 2025-06-22 20:24:01.673917 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:24:01.673930 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:01.673943 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:01.673955 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:01.673967 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:01.673980 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:01.673992 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 20:24:01.674005 | orchestrator | 2025-06-22 20:24:01.674105 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:24:01.674125 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:01.674139 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:01.674152 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:01.674165 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:01.674223 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:01.674237 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:01.674248 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:01.674259 | orchestrator | 2025-06-22 20:24:01.674270 | orchestrator | 2025-06-22 20:24:01.674281 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:24:01.674292 | orchestrator | Sunday 22 June 2025 20:24:01 +0000 (0:00:00.539) 0:00:08.226 *********** 2025-06-22 20:24:01.674303 | orchestrator | =============================================================================== 2025-06-22 20:24:01.674313 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.07s 2025-06-22 20:24:01.674324 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s 2025-06-22 20:24:01.674335 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s 2025-06-22 20:24:01.674346 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-06-22 20:24:01.932010 | orchestrator | + osism validate ceph-mons 2025-06-22 20:24:03.679133 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:24:03.679240 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:24:03.679255 | orchestrator | Registering Redlock._release_script 2025-06-22 20:24:22.134532 | orchestrator | 2025-06-22 20:24:22.134654 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-22 20:24:22.134672 | orchestrator | 2025-06-22 20:24:22.134684 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:24:22.134696 | orchestrator | Sunday 22 June 2025 20:24:07 +0000 (0:00:00.335) 0:00:00.335 *********** 2025-06-22 20:24:22.134708 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:22.134719 | orchestrator | 2025-06-22 20:24:22.134730 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 20:24:22.134741 | orchestrator | Sunday 22 June 2025 20:24:08 +0000 (0:00:00.572) 0:00:00.907 *********** 2025-06-22 20:24:22.134751 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:22.134762 | orchestrator | 2025-06-22 20:24:22.134773 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:24:22.134784 | orchestrator | Sunday 22 June 2025 20:24:08 +0000 
(0:00:00.695) 0:00:01.603 *********** 2025-06-22 20:24:22.134795 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.134806 | orchestrator | 2025-06-22 20:24:22.134818 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-22 20:24:22.134829 | orchestrator | Sunday 22 June 2025 20:24:09 +0000 (0:00:00.209) 0:00:01.812 *********** 2025-06-22 20:24:22.134840 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.134850 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:22.134861 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:22.134872 | orchestrator | 2025-06-22 20:24:22.134883 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-22 20:24:22.134894 | orchestrator | Sunday 22 June 2025 20:24:09 +0000 (0:00:00.251) 0:00:02.063 *********** 2025-06-22 20:24:22.134905 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:22.134915 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:22.134926 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.134937 | orchestrator | 2025-06-22 20:24:22.134948 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-22 20:24:22.134958 | orchestrator | Sunday 22 June 2025 20:24:10 +0000 (0:00:00.983) 0:00:03.047 *********** 2025-06-22 20:24:22.134969 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.134980 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:22.135012 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:22.135023 | orchestrator | 2025-06-22 20:24:22.135034 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-22 20:24:22.135045 | orchestrator | Sunday 22 June 2025 20:24:10 +0000 (0:00:00.251) 0:00:03.299 *********** 2025-06-22 20:24:22.135055 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.135066 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:22.135077 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:22.135087 | orchestrator | 2025-06-22 20:24:22.135098 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:22.135109 | orchestrator | Sunday 22 June 2025 20:24:10 +0000 (0:00:00.376) 0:00:03.675 *********** 2025-06-22 20:24:22.135120 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.135130 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:22.135141 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:22.135152 | orchestrator | 2025-06-22 20:24:22.135163 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-22 20:24:22.135173 | orchestrator | Sunday 22 June 2025 20:24:11 +0000 (0:00:00.278) 0:00:03.954 *********** 2025-06-22 20:24:22.135184 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.135195 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:22.135206 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:22.135217 | orchestrator | 2025-06-22 20:24:22.135227 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-22 20:24:22.135238 | orchestrator | Sunday 22 June 2025 20:24:11 +0000 (0:00:00.264) 0:00:04.218 *********** 2025-06-22 20:24:22.135249 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.135260 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:22.135270 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:22.135281 | orchestrator | 
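The container tests above reduce to confirming that a running ceph-mon container exists on each monitor node. A minimal manual equivalent, assuming the Docker CLI and a container name containing "ceph-mon", in line with the ceph-* container naming used elsewhere in this log:

# Succeeds if at least one running container whose name contains "ceph-mon" exists on this host.
if docker ps --filter name=ceph-mon --filter status=running --format '{{.Names}}' | grep -q .; then
    echo "ceph-mon: running"
else
    echo "ceph-mon: not running" >&2
fi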
2025-06-22 20:24:22.135292 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:22.135303 | orchestrator | Sunday 22 June 2025 20:24:11 +0000 (0:00:00.296) 0:00:04.515 *********** 2025-06-22 20:24:22.135314 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.135324 | orchestrator | 2025-06-22 20:24:22.135335 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:22.135346 | orchestrator | Sunday 22 June 2025 20:24:12 +0000 (0:00:00.660) 0:00:05.175 *********** 2025-06-22 20:24:22.135357 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.135368 | orchestrator | 2025-06-22 20:24:22.135379 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:22.135389 | orchestrator | Sunday 22 June 2025 20:24:12 +0000 (0:00:00.239) 0:00:05.415 *********** 2025-06-22 20:24:22.135401 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.135412 | orchestrator | 2025-06-22 20:24:22.135422 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:22.135433 | orchestrator | Sunday 22 June 2025 20:24:12 +0000 (0:00:00.238) 0:00:05.653 *********** 2025-06-22 20:24:22.135444 | orchestrator | 2025-06-22 20:24:22.135455 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:22.135516 | orchestrator | Sunday 22 June 2025 20:24:13 +0000 (0:00:00.068) 0:00:05.722 *********** 2025-06-22 20:24:22.135538 | orchestrator | 2025-06-22 20:24:22.135556 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:22.135567 | orchestrator | Sunday 22 June 2025 20:24:13 +0000 (0:00:00.066) 0:00:05.789 *********** 2025-06-22 20:24:22.135578 | orchestrator | 2025-06-22 20:24:22.135589 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:22.135600 | orchestrator | Sunday 22 June 2025 20:24:13 +0000 (0:00:00.069) 0:00:05.858 *********** 2025-06-22 20:24:22.135611 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.135621 | orchestrator | 2025-06-22 20:24:22.135632 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-22 20:24:22.135643 | orchestrator | Sunday 22 June 2025 20:24:13 +0000 (0:00:00.246) 0:00:06.104 *********** 2025-06-22 20:24:22.135662 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.135673 | orchestrator | 2025-06-22 20:24:22.135706 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-22 20:24:22.135718 | orchestrator | Sunday 22 June 2025 20:24:13 +0000 (0:00:00.238) 0:00:06.342 *********** 2025-06-22 20:24:22.135729 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.135739 | orchestrator | 2025-06-22 20:24:22.135750 | orchestrator | TASK [Get monmap info from one mon container] ********************************** 2025-06-22 20:24:22.135760 | orchestrator | Sunday 22 June 2025 20:24:13 +0000 (0:00:00.115) 0:00:06.458 *********** 2025-06-22 20:24:22.135788 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:24:22.135799 | orchestrator | 2025-06-22 20:24:22.135810 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-22 20:24:22.135821 | orchestrator | Sunday 22 June 2025 20:24:15 +0000 
(0:00:01.584) 0:00:08.042 *********** 2025-06-22 20:24:22.135832 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.135842 | orchestrator | 2025-06-22 20:24:22.135853 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-22 20:24:22.135864 | orchestrator | Sunday 22 June 2025 20:24:15 +0000 (0:00:00.327) 0:00:08.370 *********** 2025-06-22 20:24:22.135875 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.135886 | orchestrator | 2025-06-22 20:24:22.135897 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-22 20:24:22.135908 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:00.328) 0:00:08.698 *********** 2025-06-22 20:24:22.135919 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.135930 | orchestrator | 2025-06-22 20:24:22.135941 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-22 20:24:22.135951 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:00.307) 0:00:09.006 *********** 2025-06-22 20:24:22.135962 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.135973 | orchestrator | 2025-06-22 20:24:22.135984 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-22 20:24:22.135995 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:00.317) 0:00:09.324 *********** 2025-06-22 20:24:22.136006 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.136016 | orchestrator | 2025-06-22 20:24:22.136027 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-22 20:24:22.136038 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:00.110) 0:00:09.434 *********** 2025-06-22 20:24:22.136049 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.136060 | orchestrator | 2025-06-22 20:24:22.136070 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-22 20:24:22.136081 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:00.120) 0:00:09.555 *********** 2025-06-22 20:24:22.136092 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.136103 | orchestrator | 2025-06-22 20:24:22.136114 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-22 20:24:22.136125 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:00.113) 0:00:09.669 *********** 2025-06-22 20:24:22.136135 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:24:22.136146 | orchestrator | 2025-06-22 20:24:22.136157 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-22 20:24:22.136168 | orchestrator | Sunday 22 June 2025 20:24:18 +0000 (0:00:01.322) 0:00:10.991 *********** 2025-06-22 20:24:22.136179 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.136189 | orchestrator | 2025-06-22 20:24:22.136200 | orchestrator | TASK [Fail cluster-health if health is not acceptable] ************************* 2025-06-22 20:24:22.136211 | orchestrator | Sunday 22 June 2025 20:24:18 +0000 (0:00:00.294) 0:00:11.286 *********** 2025-06-22 20:24:22.136222 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.136232 | orchestrator | 2025-06-22 20:24:22.136243 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-22 20:24:22.136254 | orchestrator | Sunday 22 June 2025 20:24:18 +0000 (0:00:00.128) 
0:00:11.415 *********** 2025-06-22 20:24:22.136272 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:22.136283 | orchestrator | 2025-06-22 20:24:22.136293 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-22 20:24:22.136304 | orchestrator | Sunday 22 June 2025 20:24:18 +0000 (0:00:00.145) 0:00:11.560 *********** 2025-06-22 20:24:22.136315 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.136326 | orchestrator | 2025-06-22 20:24:22.136337 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-22 20:24:22.136348 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.140) 0:00:11.700 *********** 2025-06-22 20:24:22.136358 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.136369 | orchestrator | 2025-06-22 20:24:22.136386 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:24:22.136398 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.278) 0:00:11.979 *********** 2025-06-22 20:24:22.136408 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:22.136419 | orchestrator | 2025-06-22 20:24:22.136430 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:24:22.136441 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.253) 0:00:12.233 *********** 2025-06-22 20:24:22.136452 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:22.136463 | orchestrator | 2025-06-22 20:24:22.136500 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:22.136512 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.239) 0:00:12.473 *********** 2025-06-22 20:24:22.136522 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:22.136533 | orchestrator | 2025-06-22 20:24:22.136544 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:22.136554 | orchestrator | Sunday 22 June 2025 20:24:21 +0000 (0:00:01.580) 0:00:14.054 *********** 2025-06-22 20:24:22.136565 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:22.136575 | orchestrator | 2025-06-22 20:24:22.136586 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:22.136597 | orchestrator | Sunday 22 June 2025 20:24:21 +0000 (0:00:00.278) 0:00:14.333 *********** 2025-06-22 20:24:22.136608 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:22.136618 | orchestrator | 2025-06-22 20:24:22.136637 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:24.264975 | orchestrator | Sunday 22 June 2025 20:24:21 +0000 (0:00:00.246) 0:00:14.579 *********** 2025-06-22 20:24:24.265113 | orchestrator | 2025-06-22 20:24:24.265140 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:24.265159 | orchestrator | Sunday 22 June 2025 20:24:21 +0000 (0:00:00.073) 0:00:14.653 *********** 2025-06-22 20:24:24.265178 | orchestrator | 2025-06-22 20:24:24.265196 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:24.265214 | orchestrator | Sunday 22 June 2025 20:24:22 +0000 (0:00:00.083) 0:00:14.736 
*********** 2025-06-22 20:24:24.265233 | orchestrator | 2025-06-22 20:24:24.265252 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:24:24.265269 | orchestrator | Sunday 22 June 2025 20:24:22 +0000 (0:00:00.075) 0:00:14.812 *********** 2025-06-22 20:24:24.265288 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:24.265300 | orchestrator | 2025-06-22 20:24:24.265311 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:24.265321 | orchestrator | Sunday 22 June 2025 20:24:23 +0000 (0:00:01.267) 0:00:16.080 *********** 2025-06-22 20:24:24.265332 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:24:24.265343 | orchestrator |  "msg": [ 2025-06-22 20:24:24.265356 | orchestrator |  "Validator run completed.", 2025-06-22 20:24:24.265368 | orchestrator |  "You can find the report file here:", 2025-06-22 20:24:24.265379 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-22T20:24:08+00:00-report.json", 2025-06-22 20:24:24.265417 | orchestrator |  "on the following host:", 2025-06-22 20:24:24.265428 | orchestrator |  "testbed-manager" 2025-06-22 20:24:24.265444 | orchestrator |  ] 2025-06-22 20:24:24.265455 | orchestrator | } 2025-06-22 20:24:24.265503 | orchestrator | 2025-06-22 20:24:24.265518 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:24:24.265533 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-22 20:24:24.265547 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:24.265560 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:24.265572 | orchestrator | 2025-06-22 20:24:24.265584 | orchestrator | 2025-06-22 20:24:24.265597 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:24:24.265610 | orchestrator | Sunday 22 June 2025 20:24:23 +0000 (0:00:00.570) 0:00:16.650 *********** 2025-06-22 20:24:24.265622 | orchestrator | =============================================================================== 2025-06-22 20:24:24.265634 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.58s 2025-06-22 20:24:24.265646 | orchestrator | Aggregate test results step one ----------------------------------------- 1.58s 2025-06-22 20:24:24.265658 | orchestrator | Gather status data ------------------------------------------------------ 1.32s 2025-06-22 20:24:24.265670 | orchestrator | Write report file ------------------------------------------------------- 1.27s 2025-06-22 20:24:24.265682 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2025-06-22 20:24:24.265694 | orchestrator | Create report output directory ------------------------------------------ 0.70s 2025-06-22 20:24:24.265706 | orchestrator | Aggregate test results step one ----------------------------------------- 0.66s 2025-06-22 20:24:24.265733 | orchestrator | Get timestamp for report file ------------------------------------------- 0.57s 2025-06-22 20:24:24.265745 | orchestrator | Print report file information ------------------------------------------- 0.57s 2025-06-22 20:24:24.265769 | orchestrator | Set test result to 
passed if container is existing ---------------------- 0.38s 2025-06-22 20:24:24.265781 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.33s 2025-06-22 20:24:24.265795 | orchestrator | Set quorum test data ---------------------------------------------------- 0.33s 2025-06-22 20:24:24.265808 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2025-06-22 20:24:24.265819 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.31s 2025-06-22 20:24:24.265830 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.30s 2025-06-22 20:24:24.265841 | orchestrator | Set health test data ---------------------------------------------------- 0.29s 2025-06-22 20:24:24.265852 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.28s 2025-06-22 20:24:24.265862 | orchestrator | Aggregate test results step two ----------------------------------------- 0.28s 2025-06-22 20:24:24.265873 | orchestrator | Prepare test data ------------------------------------------------------- 0.28s 2025-06-22 20:24:24.265884 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.26s 2025-06-22 20:24:24.523101 | orchestrator | + osism validate ceph-mgrs 2025-06-22 20:24:26.315795 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:24:26.315894 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:24:26.315910 | orchestrator | Registering Redlock._release_script 2025-06-22 20:24:44.704578 | orchestrator | 2025-06-22 20:24:44.704685 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-06-22 20:24:44.704701 | orchestrator | 2025-06-22 20:24:44.704714 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:24:44.704754 | orchestrator | Sunday 22 June 2025 20:24:30 +0000 (0:00:00.464) 0:00:00.464 *********** 2025-06-22 20:24:44.704767 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:44.704778 | orchestrator | 2025-06-22 20:24:44.704789 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 20:24:44.704800 | orchestrator | Sunday 22 June 2025 20:24:31 +0000 (0:00:00.640) 0:00:01.105 *********** 2025-06-22 20:24:44.704810 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:44.704821 | orchestrator | 2025-06-22 20:24:44.704832 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:24:44.704843 | orchestrator | Sunday 22 June 2025 20:24:32 +0000 (0:00:00.789) 0:00:01.894 *********** 2025-06-22 20:24:44.704854 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.704866 | orchestrator | 2025-06-22 20:24:44.704877 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-22 20:24:44.704888 | orchestrator | Sunday 22 June 2025 20:24:32 +0000 (0:00:00.243) 0:00:02.137 *********** 2025-06-22 20:24:44.704899 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.704909 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:44.704920 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:44.704931 | orchestrator | 2025-06-22 20:24:44.704942 | orchestrator | TASK [Get container info] ****************************************************** 
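The mgr checks below gather the module list from Ceph and compare the enabled modules against a required set. A minimal sketch of the same query, assuming the ceph CLI, jq, and the reef-era JSON layout of "ceph mgr module ls"; "prometheus" is only an illustrative module name, not taken from this job's configuration:

# List the enabled mgr modules, then test for one required module.
ceph mgr module ls -f json | jq -r '.enabled_modules[]'
if ceph mgr module ls -f json | jq -e '.enabled_modules | index("prometheus") != null' >/dev/null; then
    echo "prometheus module enabled"
else
    echo "prometheus module not enabled" >&2
fi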
2025-06-22 20:24:44.704952 | orchestrator | Sunday 22 June 2025 20:24:32 +0000 (0:00:00.284) 0:00:02.421 *********** 2025-06-22 20:24:44.704965 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.704983 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:44.704997 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:44.705008 | orchestrator | 2025-06-22 20:24:44.705019 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-22 20:24:44.705030 | orchestrator | Sunday 22 June 2025 20:24:33 +0000 (0:00:00.993) 0:00:03.414 *********** 2025-06-22 20:24:44.705041 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.705051 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:44.705062 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:44.705072 | orchestrator | 2025-06-22 20:24:44.705083 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-22 20:24:44.705096 | orchestrator | Sunday 22 June 2025 20:24:33 +0000 (0:00:00.288) 0:00:03.703 *********** 2025-06-22 20:24:44.705108 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.705120 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:44.705132 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:44.705144 | orchestrator | 2025-06-22 20:24:44.705156 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:44.705168 | orchestrator | Sunday 22 June 2025 20:24:34 +0000 (0:00:00.484) 0:00:04.187 *********** 2025-06-22 20:24:44.705180 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.705192 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:44.705222 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:44.705235 | orchestrator | 2025-06-22 20:24:44.705247 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-06-22 20:24:44.705259 | orchestrator | Sunday 22 June 2025 20:24:34 +0000 (0:00:00.308) 0:00:04.495 *********** 2025-06-22 20:24:44.705272 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.705284 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:44.705296 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:44.705308 | orchestrator | 2025-06-22 20:24:44.705320 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-06-22 20:24:44.705333 | orchestrator | Sunday 22 June 2025 20:24:34 +0000 (0:00:00.295) 0:00:04.791 *********** 2025-06-22 20:24:44.705345 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.705357 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:44.705370 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:44.705382 | orchestrator | 2025-06-22 20:24:44.705394 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:44.705417 | orchestrator | Sunday 22 June 2025 20:24:35 +0000 (0:00:00.275) 0:00:05.067 *********** 2025-06-22 20:24:44.705429 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.705441 | orchestrator | 2025-06-22 20:24:44.705454 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:44.705464 | orchestrator | Sunday 22 June 2025 20:24:35 +0000 (0:00:00.666) 0:00:05.733 *********** 2025-06-22 20:24:44.705497 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.705508 | orchestrator | 2025-06-22 20:24:44.705519 | orchestrator | 
TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:44.705530 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.239) 0:00:05.973 *********** 2025-06-22 20:24:44.705541 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.705552 | orchestrator | 2025-06-22 20:24:44.705562 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:44.705579 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.237) 0:00:06.210 *********** 2025-06-22 20:24:44.705590 | orchestrator | 2025-06-22 20:24:44.705600 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:44.705611 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.071) 0:00:06.281 *********** 2025-06-22 20:24:44.705622 | orchestrator | 2025-06-22 20:24:44.705633 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:44.705644 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.068) 0:00:06.350 *********** 2025-06-22 20:24:44.705654 | orchestrator | 2025-06-22 20:24:44.705665 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:44.705676 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.070) 0:00:06.421 *********** 2025-06-22 20:24:44.705687 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.705697 | orchestrator | 2025-06-22 20:24:44.705708 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-22 20:24:44.705719 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.251) 0:00:06.673 *********** 2025-06-22 20:24:44.705730 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.705741 | orchestrator | 2025-06-22 20:24:44.705770 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-06-22 20:24:44.705782 | orchestrator | Sunday 22 June 2025 20:24:37 +0000 (0:00:00.246) 0:00:06.919 *********** 2025-06-22 20:24:44.705793 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.705804 | orchestrator | 2025-06-22 20:24:44.705815 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-06-22 20:24:44.705826 | orchestrator | Sunday 22 June 2025 20:24:37 +0000 (0:00:00.115) 0:00:07.035 *********** 2025-06-22 20:24:44.705837 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:24:44.705847 | orchestrator | 2025-06-22 20:24:44.705858 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-06-22 20:24:44.705870 | orchestrator | Sunday 22 June 2025 20:24:39 +0000 (0:00:01.949) 0:00:08.984 *********** 2025-06-22 20:24:44.705881 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.705892 | orchestrator | 2025-06-22 20:24:44.705902 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-06-22 20:24:44.705913 | orchestrator | Sunday 22 June 2025 20:24:39 +0000 (0:00:00.229) 0:00:09.213 *********** 2025-06-22 20:24:44.705924 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.705935 | orchestrator | 2025-06-22 20:24:44.705946 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-06-22 20:24:44.705957 | orchestrator | Sunday 22 June 2025 20:24:39 +0000 (0:00:00.485) 0:00:09.699 *********** 2025-06-22 
20:24:44.705968 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.705979 | orchestrator | 2025-06-22 20:24:44.705989 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-06-22 20:24:44.706000 | orchestrator | Sunday 22 June 2025 20:24:39 +0000 (0:00:00.159) 0:00:09.859 *********** 2025-06-22 20:24:44.706011 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:44.706087 | orchestrator | 2025-06-22 20:24:44.706107 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:24:44.706119 | orchestrator | Sunday 22 June 2025 20:24:40 +0000 (0:00:00.145) 0:00:10.005 *********** 2025-06-22 20:24:44.706129 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:44.706140 | orchestrator | 2025-06-22 20:24:44.706151 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:24:44.706162 | orchestrator | Sunday 22 June 2025 20:24:40 +0000 (0:00:00.251) 0:00:10.257 *********** 2025-06-22 20:24:44.706173 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:44.706184 | orchestrator | 2025-06-22 20:24:44.706195 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:44.706205 | orchestrator | Sunday 22 June 2025 20:24:40 +0000 (0:00:00.233) 0:00:10.490 *********** 2025-06-22 20:24:44.706216 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:44.706227 | orchestrator | 2025-06-22 20:24:44.706238 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:44.706249 | orchestrator | Sunday 22 June 2025 20:24:41 +0000 (0:00:01.226) 0:00:11.717 *********** 2025-06-22 20:24:44.706260 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:44.706270 | orchestrator | 2025-06-22 20:24:44.706281 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:44.706292 | orchestrator | Sunday 22 June 2025 20:24:42 +0000 (0:00:00.265) 0:00:11.982 *********** 2025-06-22 20:24:44.706303 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:44.706314 | orchestrator | 2025-06-22 20:24:44.706325 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:44.706336 | orchestrator | Sunday 22 June 2025 20:24:42 +0000 (0:00:00.251) 0:00:12.234 *********** 2025-06-22 20:24:44.706347 | orchestrator | 2025-06-22 20:24:44.706357 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:44.706368 | orchestrator | Sunday 22 June 2025 20:24:42 +0000 (0:00:00.071) 0:00:12.305 *********** 2025-06-22 20:24:44.706379 | orchestrator | 2025-06-22 20:24:44.706390 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:44.706401 | orchestrator | Sunday 22 June 2025 20:24:42 +0000 (0:00:00.069) 0:00:12.375 *********** 2025-06-22 20:24:44.706412 | orchestrator | 2025-06-22 20:24:44.706422 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:24:44.706433 | orchestrator | Sunday 22 June 2025 20:24:42 +0000 (0:00:00.073) 0:00:12.448 *********** 2025-06-22 20:24:44.706444 | orchestrator | changed: [testbed-node-0 -> 
testbed-manager(192.168.16.5)] 2025-06-22 20:24:44.706455 | orchestrator | 2025-06-22 20:24:44.706466 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:44.706534 | orchestrator | Sunday 22 June 2025 20:24:44 +0000 (0:00:01.698) 0:00:14.146 *********** 2025-06-22 20:24:44.706547 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:24:44.706558 | orchestrator |  "msg": [ 2025-06-22 20:24:44.706575 | orchestrator |  "Validator run completed.", 2025-06-22 20:24:44.706587 | orchestrator |  "You can find the report file here:", 2025-06-22 20:24:44.706598 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-22T20:24:31+00:00-report.json", 2025-06-22 20:24:44.706610 | orchestrator |  "on the following host:", 2025-06-22 20:24:44.706621 | orchestrator |  "testbed-manager" 2025-06-22 20:24:44.706632 | orchestrator |  ] 2025-06-22 20:24:44.706643 | orchestrator | } 2025-06-22 20:24:44.706654 | orchestrator | 2025-06-22 20:24:44.706665 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:24:44.706678 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:24:44.706690 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:44.706721 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:44.998187 | orchestrator | 2025-06-22 20:24:44.998286 | orchestrator | 2025-06-22 20:24:44.998300 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:24:44.998315 | orchestrator | Sunday 22 June 2025 20:24:44 +0000 (0:00:00.405) 0:00:14.551 *********** 2025-06-22 20:24:44.998326 | orchestrator | =============================================================================== 2025-06-22 20:24:44.998336 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.95s 2025-06-22 20:24:44.998347 | orchestrator | Write report file ------------------------------------------------------- 1.70s 2025-06-22 20:24:44.998358 | orchestrator | Aggregate test results step one ----------------------------------------- 1.23s 2025-06-22 20:24:44.998369 | orchestrator | Get container info ------------------------------------------------------ 0.99s 2025-06-22 20:24:44.998379 | orchestrator | Create report output directory ------------------------------------------ 0.79s 2025-06-22 20:24:44.998390 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s 2025-06-22 20:24:44.998401 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s 2025-06-22 20:24:44.998411 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.49s 2025-06-22 20:24:44.998422 | orchestrator | Set test result to passed if container is existing ---------------------- 0.48s 2025-06-22 20:24:44.998432 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-06-22 20:24:44.998443 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s 2025-06-22 20:24:44.998454 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.30s 2025-06-22 20:24:44.998464 | orchestrator | Set test result to 
failed if container is missing ----------------------- 0.29s 2025-06-22 20:24:44.998508 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2025-06-22 20:24:44.998520 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.28s 2025-06-22 20:24:44.998532 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2025-06-22 20:24:44.998542 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.25s 2025-06-22 20:24:44.998553 | orchestrator | Print report file information ------------------------------------------- 0.25s 2025-06-22 20:24:44.998564 | orchestrator | Aggregate test results step three --------------------------------------- 0.25s 2025-06-22 20:24:44.998575 | orchestrator | Fail due to missing containers ------------------------------------------ 0.25s 2025-06-22 20:24:45.269704 | orchestrator | + osism validate ceph-osds 2025-06-22 20:24:46.966685 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:24:46.967596 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:24:46.967624 | orchestrator | Registering Redlock._release_script 2025-06-22 20:24:54.311784 | orchestrator | 2025-06-22 20:24:54.311878 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-06-22 20:24:54.311894 | orchestrator | 2025-06-22 20:24:54.311905 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:24:54.311916 | orchestrator | Sunday 22 June 2025 20:24:50 +0000 (0:00:00.322) 0:00:00.322 *********** 2025-06-22 20:24:54.311928 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:54.311939 | orchestrator | 2025-06-22 20:24:54.311949 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 20:24:54.311960 | orchestrator | Sunday 22 June 2025 20:24:51 +0000 (0:00:00.588) 0:00:00.911 *********** 2025-06-22 20:24:54.311971 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:54.311982 | orchestrator | 2025-06-22 20:24:54.311993 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 20:24:54.312004 | orchestrator | Sunday 22 June 2025 20:24:51 +0000 (0:00:00.332) 0:00:01.244 *********** 2025-06-22 20:24:54.312050 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:54.312070 | orchestrator | 2025-06-22 20:24:54.312083 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:24:54.312093 | orchestrator | Sunday 22 June 2025 20:24:52 +0000 (0:00:00.756) 0:00:02.000 *********** 2025-06-22 20:24:54.312104 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:54.312116 | orchestrator | 2025-06-22 20:24:54.312128 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-22 20:24:54.312138 | orchestrator | Sunday 22 June 2025 20:24:52 +0000 (0:00:00.122) 0:00:02.123 *********** 2025-06-22 20:24:54.312149 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:54.312160 | orchestrator | 2025-06-22 20:24:54.312175 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-22 20:24:54.312195 | orchestrator | Sunday 22 June 2025 20:24:52 +0000 (0:00:00.125) 0:00:02.248 *********** 
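The OSD device calculation that follows derives the expected number of OSDs from the configured devices per host. A minimal cross-check against the running cluster, assuming the ceph CLI and jq; the expected value of 6 comes from the osd tree printed earlier (three OSD hosts with two OSDs each):

# Compare the number of OSDs Ceph reports as up with the expected total.
expected=6
up=$(ceph osd stat -f json | jq '.num_up_osds')
if [ "$up" -eq "$expected" ]; then
    echo "all ${expected} OSDs up"
else
    echo "only ${up}/${expected} OSDs up" >&2
fi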
2025-06-22 20:24:54.312214 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:54.312226 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:54.312236 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:54.312247 | orchestrator | 2025-06-22 20:24:54.312258 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-22 20:24:54.312268 | orchestrator | Sunday 22 June 2025 20:24:52 +0000 (0:00:00.263) 0:00:02.512 *********** 2025-06-22 20:24:54.312279 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:54.312291 | orchestrator | 2025-06-22 20:24:54.312304 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-22 20:24:54.312316 | orchestrator | Sunday 22 June 2025 20:24:52 +0000 (0:00:00.132) 0:00:02.644 *********** 2025-06-22 20:24:54.312328 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:54.312340 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:54.312352 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:54.312364 | orchestrator | 2025-06-22 20:24:54.312377 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-22 20:24:54.312389 | orchestrator | Sunday 22 June 2025 20:24:53 +0000 (0:00:00.283) 0:00:02.927 *********** 2025-06-22 20:24:54.312401 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:54.312414 | orchestrator | 2025-06-22 20:24:54.312426 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:54.312438 | orchestrator | Sunday 22 June 2025 20:24:53 +0000 (0:00:00.452) 0:00:03.380 *********** 2025-06-22 20:24:54.312450 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:54.312462 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:54.312474 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:54.312513 | orchestrator | 2025-06-22 20:24:54.312525 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-22 20:24:54.312536 | orchestrator | Sunday 22 June 2025 20:24:54 +0000 (0:00:00.347) 0:00:03.728 *********** 2025-06-22 20:24:54.312565 | orchestrator | skipping: [testbed-node-3] => (item={'id': '10675a21218e797f77340cf7c92daebdff0fff8806ecf4e5067a6b439e21665b', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:24:54.312579 | orchestrator | skipping: [testbed-node-3] => (item={'id': '426936826a2bcc5878a58bf0880bfc674aea74bb59e4dcbd7fc4369ea94e25c3', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:24:54.312592 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd55f640692b03e159f15d504448f5306cdf9cbd7a5d7b49c94e06a4e1fa935cb', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-22 20:24:54.312604 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f8b23fc3310e67da981785ee83a6eb4f5bc2f0832ef0458561d806e2cc655135', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:54.312623 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e0543a1573f07f34cddb780b6a6184b8990d3fb88c1c636a9a74b36c3168dff2', 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:54.312657 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5b39e3b07cc1742663f74455ea3fe61dd8495e463eb3e74bba12ab2e8ea2dd18', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:54.312674 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b6cb5c37acb244079c48e5aa46d3a97fd553fb4c8cbb074168e1b4a0e168284d', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:54.312685 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5f694159353b941cbb1c91f4a34964fa107bf0256949c573f4927dd7b734d985', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:24:54.312696 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd0c8c69ba3e803a2f4a5731f0a9d2bc76c83c19edd53d70aa998a65553b1c02e', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-22 20:24:54.312707 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd2cf50797e1e9cbf7f0ecb6ca8717e1b3401a4b03a358455d3e2bba4ea58c629', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-22 20:24:54.312722 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2a4ba57b4be60d2433793d0f604034b390c7ceebdf9fe1f004879073b75bc343', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:24:54.312734 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd70f1b81eb5e6b96858bd8a1a7039cbd8bdefc26f966aa5429bdc91aa02074ad', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 20:24:54.312745 | orchestrator | ok: [testbed-node-3] => (item={'id': '30651e54505f6018889b0c61a0b2cc51308d545b0c0a97927a0ac2ded9416ff9', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-22 20:24:54.312756 | orchestrator | ok: [testbed-node-3] => (item={'id': '6d3d47e52874fa7ce53f2f54776858c1b5ebf3fe762813ff2d7148ba723b1324', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-22 20:24:54.312767 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0ef17dd872becd35183dddce623e7befe17467f45388af8dbdf7d8bd3e5c8d86', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 20:24:54.312778 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4e7168ca7d523333d08837ce728a06ddb81e02d144c8d43ff731da86e203f4ef', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:24:54.312789 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'3b8bee93b1494b9fd389a94ddfb34a7da4ec47ed33ea1ea0113912b65ee79ecb', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:24:54.312800 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f26c657dbbce6ba178d37d84d7ae0bcdb8005ea2d658ce27c8016a8fdf70268a', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-22 20:24:54.312817 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b5f8c4a225d36e764afbb2c259a6104f5d7521e149ec150d2dbd9eea3a602ecc', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-22 20:24:54.312828 | orchestrator | skipping: [testbed-node-3] => (item={'id': '655f48da362e82464f4d09b700689b35102d8def7cc98416e3f5cc226100de0f', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:24:54.312840 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'efe34543443b26c3cfef0eb03c7b2b8e4f30d4f21a757c266ee1a1e3c6764954', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:24:54.312857 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9f51259ce6696dcf7b8515ff86199a3fd089a88af4e34a7fd982730ae8fb4b01', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:24:54.549698 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2ffd65b7b2ed64bbd18e618de51e390af8dad90f033b51fa6aa752b15e04c301', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-22 20:24:54.549782 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f6c558b7758b2f2b13c4f5d5f3b75d89eef5cffa721b17707d78d183eca77c51', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:54.549798 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'fb254ec7012d4af2398cace34b5863d2827146fa2f27ca55fc55ee373fa6abfc', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:54.549811 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a7aa8be1bd8ee538c68d2d2c1ffc607226de0064c61648e06306d09c43b75a07', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:54.549837 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e8060cafa7ce2eba16c681fc39edb564e2df842109129365b9c56ef49bb97976', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:54.549850 | orchestrator | skipping: [testbed-node-4] => (item={'id': '058956db00ce29398c1c5b1e6ca71023883f66f82819681571f11727e48e05c3', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:24:54.549861 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'83a62313f416ccffea8b497d6342bd417f138c3f910a68d4383707ea44a5b8ba', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-22 20:24:54.549872 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a6587212844c62d0872f44e4abaad3ea8f4ab9b236ec92707c147e5eff388a03', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-22 20:24:54.549883 | orchestrator | skipping: [testbed-node-4] => (item={'id': '43ce74eb3d873bda1f2e7c0b21df95ea77f36bd71d7008160bcc2cd0a3afe417', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:24:54.549894 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f890db4696bb4197856b4a1bac87f2cb98bad973ac21a8322a38a5ee740bec07', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 20:24:54.549924 | orchestrator | ok: [testbed-node-4] => (item={'id': '4c77e06a3032c7984f4a0b741cabd2859654821fb9b7c2153642a0717f79969b', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-22 20:24:54.549936 | orchestrator | ok: [testbed-node-4] => (item={'id': '2ea3157159c2cd27533fa1063d491d7d4b1c8aac9c4505c67772cc294f3341a8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-22 20:24:54.549948 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0bb1f76f6e83f3d4a2ce379499b2eacaab7461124369d105091e2557ed4c3707', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 20:24:54.549959 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dcf211bb3cb65b916c625f0c40d9d5f97f5201ccb4e84c6c9aa5703303c1d8bd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:24:54.549971 | orchestrator | skipping: [testbed-node-4] => (item={'id': '54a4543013abdf40653142127b3c3ff5e5a40f400c7068febb0cafeeecfe6b7a', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:24:54.549998 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2e1996ba7ec364df5a052c6c0da6a090aeefdc738876fc45feafb3b50ce58ba9', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-22 20:24:54.550010 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9c4bd21b6e22b3a2f92e95271dacf63b2fe4abfd559b0a9df8a2a3475eb43d84', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-22 20:24:54.550072 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f0e0fdd030704ba6de3e6d06139662963e7c453d75cb4d5346f3d712d080056d', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:24:54.550085 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'fc4b6f34796d7bcebde7f4bb1b18d8aa244d3b09f65c83e99cdfe540d67ab4cf', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:24:54.550096 | orchestrator | skipping: [testbed-node-5] => (item={'id': '79c9465d44a17a8aa6e6072e5ee0f33283bb5a3ccd7dd54f17a9133d083afb1d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:24:54.550107 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3745269e918b85aa4087d164c64fef56ce88d62d9f3d7b7fa60245dcdd3660d6', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-22 20:24:54.550119 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ae61862dcc47cd3f494d64c490179f1ae5c00cdaf23eebf2c5fe8f0ca7233fff', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:54.550130 | orchestrator | skipping: [testbed-node-5] => (item={'id': '46fb160f4d842baa00587a10023c02cc2817947a8a98e7dbf57f209fffe4d4d8', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:54.550142 | orchestrator | skipping: [testbed-node-5] => (item={'id': '269fed5489f7b94cd27cb139a3a72f16f07f25cb317230c984041996c8319097', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:54.550167 | orchestrator | skipping: [testbed-node-5] => (item={'id': '775850aa15215aeb276dc4177d74205aaf2945428cfd5ebdd549e759f48929c7', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:54.550179 | orchestrator | skipping: [testbed-node-5] => (item={'id': '509aa81ca96a7ebbe45d0128d03fcfa8dd5e48295efbc294307c25ae62071098', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:24:54.550190 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2d9f61ab4016eeaeff3a8a997b410fa5227f60d071552e17ffc842d0b30d9ee2', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-22 20:24:54.550201 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8f883ed8c273514c30a7db330ac1825812199e70de252a35dc907067b21f92ae', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-22 20:24:54.550213 | orchestrator | skipping: [testbed-node-5] => (item={'id': '6514fc04756aec4f8d65ba2a94a6c48e0a041fa349451c41ca0aef0ac91e5909', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:24:54.550224 | orchestrator | skipping: [testbed-node-5] => (item={'id': '12163fbd1fc7b2892625ed3e34d30cf79b627f5b38553161e67832dfd972d9e8', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 
20:24:54.550243 | orchestrator | ok: [testbed-node-5] => (item={'id': '2631556ad061d53fb43692ba4358b3050cbab8f44a6ba8185f1ef036f3c46187', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-22 20:25:02.530214 | orchestrator | ok: [testbed-node-5] => (item={'id': '4e587ded1c17c6a8453bc9a1ae36dc8e740cccfdd7a96a8a4cb3270ee801ca54', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-22 20:25:02.530320 | orchestrator | skipping: [testbed-node-5] => (item={'id': '79b5be19b5241c6c2c8c7591351bd52d0cc2588141d05c8b230a3442e763a4cb', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 20:25:02.530335 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'fdb8bf2a394d1aea69ab26dacbbb488aa8e1b56bb8c0ce5c244ed7a9798a128c', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:25:02.530350 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ff73b8d542c3b356211e14daa9cc53622b0011cf17465573bcf6810d567c73ad', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:25:02.530378 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bc229ea26a98e2127767ebedde6b66462d44f3b3aa4555b9cb4df3b439680a3f', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-22 20:25:02.530389 | orchestrator | skipping: [testbed-node-5] => (item={'id': '342b13f90addba43f923a92155725b9b99b16660f1b40727ce639deb18e5e64e', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})  2025-06-22 20:25:02.530401 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'f9ee8566521bf04254493e25f5590bcf98cf20d955b06043a18a3395737a5458', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:25:02.530436 | orchestrator | 2025-06-22 20:25:02.530451 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-22 20:25:02.530463 | orchestrator | Sunday 22 June 2025 20:24:54 +0000 (0:00:00.464) 0:00:04.192 *********** 2025-06-22 20:25:02.530474 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.530531 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:02.530543 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:02.530554 | orchestrator | 2025-06-22 20:25:02.530565 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-22 20:25:02.530575 | orchestrator | Sunday 22 June 2025 20:24:54 +0000 (0:00:00.259) 0:00:04.452 *********** 2025-06-22 20:25:02.530586 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.530598 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:25:02.530609 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:25:02.530619 | orchestrator | 2025-06-22 20:25:02.530630 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-22 20:25:02.530641 | orchestrator | Sunday 22 June 2025 20:24:55 +0000 (0:00:00.347) 0:00:04.799 *********** 
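
The validator at this point has listed the ceph-osd containers on each storage host, counted them, and passes only if the count matches the expected number of OSDs per host; the per-node results follow below. A minimal manual spot check along the same lines is sketched here. It is only an illustration under assumptions not taken from the validator itself: Docker as the container runtime, container names beginning with ceph-osd-, and two OSDs expected per node as in this testbed layout.

# Spot check on a storage node: count running ceph-osd containers and
# compare against the expected per-node OSD count (2 in this testbed).
EXPECTED_OSDS=2
actual=$(docker ps --filter 'name=ceph-osd-' --filter 'status=running' --format '{{.Names}}' | wc -l)
if [ "$actual" -eq "$EXPECTED_OSDS" ]; then
    echo "OK: $actual ceph-osd containers running"
else
    echo "FAILED: expected $EXPECTED_OSDS ceph-osd containers, found $actual"
    exit 1
fi
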
2025-06-22 20:25:02.530652 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.530662 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:02.530673 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:02.530686 | orchestrator | 2025-06-22 20:25:02.530700 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:25:02.530712 | orchestrator | Sunday 22 June 2025 20:24:55 +0000 (0:00:00.303) 0:00:05.102 *********** 2025-06-22 20:25:02.530725 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.530738 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:02.530750 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:02.530762 | orchestrator | 2025-06-22 20:25:02.530775 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-22 20:25:02.530788 | orchestrator | Sunday 22 June 2025 20:24:55 +0000 (0:00:00.253) 0:00:05.355 *********** 2025-06-22 20:25:02.530801 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-22 20:25:02.530815 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-22 20:25:02.530828 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.530840 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-22 20:25:02.530853 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-22 20:25:02.530865 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:25:02.530879 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-22 20:25:02.530891 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-22 20:25:02.530904 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:25:02.530916 | orchestrator | 2025-06-22 20:25:02.530929 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-22 20:25:02.530942 | orchestrator | Sunday 22 June 2025 20:24:55 +0000 (0:00:00.273) 0:00:05.629 *********** 2025-06-22 20:25:02.530954 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.530967 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:02.530980 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:02.530992 | orchestrator | 2025-06-22 20:25:02.531023 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-22 20:25:02.531035 | orchestrator | Sunday 22 June 2025 20:24:56 +0000 (0:00:00.472) 0:00:06.101 *********** 2025-06-22 20:25:02.531046 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.531057 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:25:02.531068 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:25:02.531079 | orchestrator | 2025-06-22 20:25:02.531090 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-22 20:25:02.531110 | orchestrator | Sunday 22 June 2025 20:24:56 +0000 (0:00:00.304) 0:00:06.406 *********** 2025-06-22 20:25:02.531121 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.531132 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:25:02.531143 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:25:02.531154 | orchestrator | 2025-06-22 
20:25:02.531165 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-22 20:25:02.531176 | orchestrator | Sunday 22 June 2025 20:24:57 +0000 (0:00:00.298) 0:00:06.705 *********** 2025-06-22 20:25:02.531186 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.531197 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:02.531208 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:02.531219 | orchestrator | 2025-06-22 20:25:02.531230 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:25:02.531241 | orchestrator | Sunday 22 June 2025 20:24:57 +0000 (0:00:00.296) 0:00:07.001 *********** 2025-06-22 20:25:02.531251 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.531262 | orchestrator | 2025-06-22 20:25:02.531273 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:25:02.531284 | orchestrator | Sunday 22 June 2025 20:24:57 +0000 (0:00:00.601) 0:00:07.603 *********** 2025-06-22 20:25:02.531295 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.531311 | orchestrator | 2025-06-22 20:25:02.531322 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:25:02.531333 | orchestrator | Sunday 22 June 2025 20:24:58 +0000 (0:00:00.242) 0:00:07.845 *********** 2025-06-22 20:25:02.531344 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.531355 | orchestrator | 2025-06-22 20:25:02.531365 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:25:02.531376 | orchestrator | Sunday 22 June 2025 20:24:58 +0000 (0:00:00.250) 0:00:08.096 *********** 2025-06-22 20:25:02.531387 | orchestrator | 2025-06-22 20:25:02.531398 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:25:02.531409 | orchestrator | Sunday 22 June 2025 20:24:58 +0000 (0:00:00.067) 0:00:08.163 *********** 2025-06-22 20:25:02.531420 | orchestrator | 2025-06-22 20:25:02.531431 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:25:02.531442 | orchestrator | Sunday 22 June 2025 20:24:58 +0000 (0:00:00.066) 0:00:08.230 *********** 2025-06-22 20:25:02.531453 | orchestrator | 2025-06-22 20:25:02.531463 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:25:02.531474 | orchestrator | Sunday 22 June 2025 20:24:58 +0000 (0:00:00.067) 0:00:08.297 *********** 2025-06-22 20:25:02.531514 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.531526 | orchestrator | 2025-06-22 20:25:02.531536 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-22 20:25:02.531547 | orchestrator | Sunday 22 June 2025 20:24:58 +0000 (0:00:00.244) 0:00:08.542 *********** 2025-06-22 20:25:02.531558 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.531569 | orchestrator | 2025-06-22 20:25:02.531579 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:25:02.531590 | orchestrator | Sunday 22 June 2025 20:24:59 +0000 (0:00:00.233) 0:00:08.775 *********** 2025-06-22 20:25:02.531601 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.531612 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:02.531623 | orchestrator | ok: [testbed-node-5] 
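
The next validation step below queries the OSD tree from a monitor host, parses the JSON output, and fails the test if any OSD is not both up and in. A rough manual equivalent is sketched here; the ceph-mon container name and the use of jq are assumptions for illustration and are not taken from the playbook.

# List OSDs from the OSD tree that are not both "up" and "in"
# (a reweight of 0 means the OSD is out). Assumes the ceph CLI is
# available in a container named ceph-mon-testbed-node-0 and that jq
# is installed on the host.
bad_osds=$(docker exec ceph-mon-testbed-node-0 ceph osd tree -f json \
  | jq -r '.nodes[] | select(.type == "osd") | select(.status != "up" or .reweight == 0) | .name')
if [ -z "$bad_osds" ]; then
    echo "OK: all OSDs are up and in"
else
    echo "FAILED: OSDs not up/in: $bad_osds"
    exit 1
fi
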
2025-06-22 20:25:02.531633 | orchestrator | 2025-06-22 20:25:02.531644 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-22 20:25:02.531655 | orchestrator | Sunday 22 June 2025 20:24:59 +0000 (0:00:00.269) 0:00:09.045 *********** 2025-06-22 20:25:02.531666 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.531676 | orchestrator | 2025-06-22 20:25:02.531687 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-22 20:25:02.531698 | orchestrator | Sunday 22 June 2025 20:24:59 +0000 (0:00:00.606) 0:00:09.651 *********** 2025-06-22 20:25:02.531709 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:25:02.531727 | orchestrator | 2025-06-22 20:25:02.531738 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-22 20:25:02.531749 | orchestrator | Sunday 22 June 2025 20:25:01 +0000 (0:00:01.551) 0:00:11.203 *********** 2025-06-22 20:25:02.531760 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.531770 | orchestrator | 2025-06-22 20:25:02.531781 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-22 20:25:02.531792 | orchestrator | Sunday 22 June 2025 20:25:01 +0000 (0:00:00.126) 0:00:11.329 *********** 2025-06-22 20:25:02.531803 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.531814 | orchestrator | 2025-06-22 20:25:02.531824 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-22 20:25:02.531835 | orchestrator | Sunday 22 June 2025 20:25:01 +0000 (0:00:00.293) 0:00:11.622 *********** 2025-06-22 20:25:02.531846 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:02.531857 | orchestrator | 2025-06-22 20:25:02.531867 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-22 20:25:02.531878 | orchestrator | Sunday 22 June 2025 20:25:02 +0000 (0:00:00.151) 0:00:11.774 *********** 2025-06-22 20:25:02.531889 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.531900 | orchestrator | 2025-06-22 20:25:02.531911 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:25:02.531921 | orchestrator | Sunday 22 June 2025 20:25:02 +0000 (0:00:00.129) 0:00:11.903 *********** 2025-06-22 20:25:02.531932 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:02.531943 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:02.531954 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:02.531964 | orchestrator | 2025-06-22 20:25:02.531975 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-22 20:25:02.531993 | orchestrator | Sunday 22 June 2025 20:25:02 +0000 (0:00:00.282) 0:00:12.186 *********** 2025-06-22 20:25:14.308324 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:25:14.308427 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:25:14.308441 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:25:14.308452 | orchestrator | 2025-06-22 20:25:14.308465 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-22 20:25:14.308478 | orchestrator | Sunday 22 June 2025 20:25:05 +0000 (0:00:02.643) 0:00:14.829 *********** 2025-06-22 20:25:14.308526 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:14.308547 | orchestrator | ok: [testbed-node-4] 2025-06-22 
20:25:14.308563 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:14.308579 | orchestrator | 2025-06-22 20:25:14.308597 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-22 20:25:14.308614 | orchestrator | Sunday 22 June 2025 20:25:05 +0000 (0:00:00.291) 0:00:15.120 *********** 2025-06-22 20:25:14.308631 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:14.308649 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:14.308668 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:14.308686 | orchestrator | 2025-06-22 20:25:14.308704 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-22 20:25:14.308723 | orchestrator | Sunday 22 June 2025 20:25:05 +0000 (0:00:00.492) 0:00:15.612 *********** 2025-06-22 20:25:14.308740 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:14.308759 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:25:14.308778 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:25:14.308790 | orchestrator | 2025-06-22 20:25:14.308801 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-22 20:25:14.308812 | orchestrator | Sunday 22 June 2025 20:25:06 +0000 (0:00:00.303) 0:00:15.916 *********** 2025-06-22 20:25:14.308823 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:14.308834 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:14.308845 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:14.308855 | orchestrator | 2025-06-22 20:25:14.308866 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-22 20:25:14.308900 | orchestrator | Sunday 22 June 2025 20:25:06 +0000 (0:00:00.466) 0:00:16.382 *********** 2025-06-22 20:25:14.308911 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:14.308921 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:25:14.308932 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:25:14.308943 | orchestrator | 2025-06-22 20:25:14.308954 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-22 20:25:14.308964 | orchestrator | Sunday 22 June 2025 20:25:07 +0000 (0:00:00.285) 0:00:16.667 *********** 2025-06-22 20:25:14.308976 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:14.308987 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:25:14.308998 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:25:14.309008 | orchestrator | 2025-06-22 20:25:14.309019 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:25:14.309030 | orchestrator | Sunday 22 June 2025 20:25:07 +0000 (0:00:00.270) 0:00:16.938 *********** 2025-06-22 20:25:14.309040 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:14.309051 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:14.309062 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:14.309072 | orchestrator | 2025-06-22 20:25:14.309083 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-22 20:25:14.309094 | orchestrator | Sunday 22 June 2025 20:25:07 +0000 (0:00:00.505) 0:00:17.443 *********** 2025-06-22 20:25:14.309105 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:14.309115 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:14.309126 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:14.309136 | orchestrator | 2025-06-22 20:25:14.309147 | 
orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-22 20:25:14.309158 | orchestrator | Sunday 22 June 2025 20:25:08 +0000 (0:00:00.720) 0:00:18.163 *********** 2025-06-22 20:25:14.309168 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:14.309179 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:14.309190 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:14.309200 | orchestrator | 2025-06-22 20:25:14.309211 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-22 20:25:14.309222 | orchestrator | Sunday 22 June 2025 20:25:08 +0000 (0:00:00.316) 0:00:18.480 *********** 2025-06-22 20:25:14.309233 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:14.309243 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:25:14.309254 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:25:14.309265 | orchestrator | 2025-06-22 20:25:14.309275 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-22 20:25:14.309286 | orchestrator | Sunday 22 June 2025 20:25:09 +0000 (0:00:00.289) 0:00:18.770 *********** 2025-06-22 20:25:14.309297 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:25:14.309307 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:25:14.309318 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:25:14.309329 | orchestrator | 2025-06-22 20:25:14.309339 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:25:14.309356 | orchestrator | Sunday 22 June 2025 20:25:09 +0000 (0:00:00.299) 0:00:19.069 *********** 2025-06-22 20:25:14.309375 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:25:14.309392 | orchestrator | 2025-06-22 20:25:14.309409 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:25:14.309427 | orchestrator | Sunday 22 June 2025 20:25:10 +0000 (0:00:00.647) 0:00:19.716 *********** 2025-06-22 20:25:14.309445 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:25:14.309464 | orchestrator | 2025-06-22 20:25:14.309482 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:25:14.309526 | orchestrator | Sunday 22 June 2025 20:25:10 +0000 (0:00:00.256) 0:00:19.973 *********** 2025-06-22 20:25:14.309545 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:25:14.309561 | orchestrator | 2025-06-22 20:25:14.309577 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:25:14.309649 | orchestrator | Sunday 22 June 2025 20:25:11 +0000 (0:00:01.459) 0:00:21.432 *********** 2025-06-22 20:25:14.309667 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:25:14.309683 | orchestrator | 2025-06-22 20:25:14.309698 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:25:14.309714 | orchestrator | Sunday 22 June 2025 20:25:11 +0000 (0:00:00.228) 0:00:21.661 *********** 2025-06-22 20:25:14.309755 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:25:14.309772 | orchestrator | 2025-06-22 20:25:14.309788 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:25:14.309804 | orchestrator | Sunday 22 June 2025 20:25:12 +0000 (0:00:00.227) 
0:00:21.888 *********** 2025-06-22 20:25:14.309820 | orchestrator | 2025-06-22 20:25:14.309836 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:25:14.309852 | orchestrator | Sunday 22 June 2025 20:25:12 +0000 (0:00:00.063) 0:00:21.951 *********** 2025-06-22 20:25:14.309868 | orchestrator | 2025-06-22 20:25:14.309884 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:25:14.309899 | orchestrator | Sunday 22 June 2025 20:25:12 +0000 (0:00:00.061) 0:00:22.013 *********** 2025-06-22 20:25:14.309916 | orchestrator | 2025-06-22 20:25:14.309931 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:25:14.309947 | orchestrator | Sunday 22 June 2025 20:25:12 +0000 (0:00:00.063) 0:00:22.077 *********** 2025-06-22 20:25:14.309963 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:25:14.309980 | orchestrator | 2025-06-22 20:25:14.309997 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:25:14.310015 | orchestrator | Sunday 22 June 2025 20:25:13 +0000 (0:00:01.119) 0:00:23.196 *********** 2025-06-22 20:25:14.310146 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:25:14.310165 | orchestrator |  "msg": [ 2025-06-22 20:25:14.310185 | orchestrator |  "Validator run completed.", 2025-06-22 20:25:14.310204 | orchestrator |  "You can find the report file here:", 2025-06-22 20:25:14.310216 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-22T20:24:51+00:00-report.json", 2025-06-22 20:25:14.310228 | orchestrator |  "on the following host:", 2025-06-22 20:25:14.310239 | orchestrator |  "testbed-manager" 2025-06-22 20:25:14.310250 | orchestrator |  ] 2025-06-22 20:25:14.310261 | orchestrator | } 2025-06-22 20:25:14.310272 | orchestrator | 2025-06-22 20:25:14.310283 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:25:14.310294 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-22 20:25:14.310307 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:25:14.310318 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:25:14.310328 | orchestrator | 2025-06-22 20:25:14.310340 | orchestrator | 2025-06-22 20:25:14.310351 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:25:14.310361 | orchestrator | Sunday 22 June 2025 20:25:13 +0000 (0:00:00.459) 0:00:23.655 *********** 2025-06-22 20:25:14.310372 | orchestrator | =============================================================================== 2025-06-22 20:25:14.310383 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.64s 2025-06-22 20:25:14.310393 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.55s 2025-06-22 20:25:14.310404 | orchestrator | Aggregate test results step one ----------------------------------------- 1.46s 2025-06-22 20:25:14.310414 | orchestrator | Write report file ------------------------------------------------------- 1.12s 2025-06-22 20:25:14.310435 | orchestrator | Create report output directory 
------------------------------------------ 0.76s 2025-06-22 20:25:14.310446 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.72s 2025-06-22 20:25:14.310457 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.65s 2025-06-22 20:25:14.310468 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.61s 2025-06-22 20:25:14.310478 | orchestrator | Aggregate test results step one ----------------------------------------- 0.60s 2025-06-22 20:25:14.310516 | orchestrator | Get timestamp for report file ------------------------------------------- 0.59s 2025-06-22 20:25:14.310528 | orchestrator | Prepare test data ------------------------------------------------------- 0.51s 2025-06-22 20:25:14.310539 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s 2025-06-22 20:25:14.310550 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.47s 2025-06-22 20:25:14.310561 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.47s 2025-06-22 20:25:14.310571 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.46s 2025-06-22 20:25:14.310582 | orchestrator | Print report file information ------------------------------------------- 0.46s 2025-06-22 20:25:14.310593 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.45s 2025-06-22 20:25:14.310604 | orchestrator | Prepare test data ------------------------------------------------------- 0.35s 2025-06-22 20:25:14.310615 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.35s 2025-06-22 20:25:14.310626 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.33s 2025-06-22 20:25:14.591635 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-22 20:25:14.597156 | orchestrator | + set -e 2025-06-22 20:25:14.597219 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 20:25:14.597232 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 20:25:14.597244 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 20:25:14.597255 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 20:25:14.597265 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 20:25:14.597277 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 20:25:14.597289 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 20:25:14.597300 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-22 20:25:14.597311 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-22 20:25:14.597322 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 20:25:14.597332 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 20:25:14.597343 | orchestrator | ++ export ARA=false 2025-06-22 20:25:14.597354 | orchestrator | ++ ARA=false 2025-06-22 20:25:14.597364 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 20:25:14.597375 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 20:25:14.597385 | orchestrator | ++ export TEMPEST=false 2025-06-22 20:25:14.597396 | orchestrator | ++ TEMPEST=false 2025-06-22 20:25:14.597407 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 20:25:14.597417 | orchestrator | ++ IS_ZUUL=true 2025-06-22 20:25:14.597428 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 20:25:14.597439 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.19 2025-06-22 20:25:14.597449 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 20:25:14.597460 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 20:25:14.597471 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 20:25:14.597481 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 20:25:14.597517 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 20:25:14.597529 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 20:25:14.597539 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 20:25:14.597550 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 20:25:14.597561 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-22 20:25:14.597572 | orchestrator | + source /etc/os-release 2025-06-22 20:25:14.597582 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-22 20:25:14.597593 | orchestrator | ++ NAME=Ubuntu 2025-06-22 20:25:14.597604 | orchestrator | ++ VERSION_ID=24.04 2025-06-22 20:25:14.597614 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-22 20:25:14.597625 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-22 20:25:14.597636 | orchestrator | ++ ID=ubuntu 2025-06-22 20:25:14.597647 | orchestrator | ++ ID_LIKE=debian 2025-06-22 20:25:14.597671 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-22 20:25:14.597682 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-22 20:25:14.597728 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-22 20:25:14.597749 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-22 20:25:14.597772 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-22 20:25:14.597792 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-22 20:25:14.597825 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-22 20:25:14.597855 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-22 20:25:14.597869 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-22 20:25:14.624179 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-22 20:25:36.648468 | orchestrator | 2025-06-22 20:25:36.648559 | orchestrator | # Status of Elasticsearch 2025-06-22 20:25:36.648567 | orchestrator | 2025-06-22 20:25:36.648573 | orchestrator | + pushd /opt/configuration/contrib 2025-06-22 20:25:36.648579 | orchestrator | + echo 2025-06-22 20:25:36.648584 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-22 20:25:36.648589 | orchestrator | + echo 2025-06-22 20:25:36.648594 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-22 20:25:36.838992 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-22 20:25:36.839093 | orchestrator | 2025-06-22 20:25:36.839109 | orchestrator | # Status of MariaDB 2025-06-22 20:25:36.839123 | orchestrator | 2025-06-22 20:25:36.839135 | orchestrator | + echo 2025-06-22 20:25:36.839146 | orchestrator | + echo '# Status of MariaDB' 2025-06-22 20:25:36.839157 | orchestrator | + echo 2025-06-22 20:25:36.839168 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-22 20:25:36.839180 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-22 20:25:36.906247 | orchestrator | Reading package lists... 2025-06-22 20:25:37.212951 | orchestrator | Building dependency tree... 2025-06-22 20:25:37.213332 | orchestrator | Reading state information... 2025-06-22 20:25:37.561251 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-22 20:25:37.561351 | orchestrator | bc set to manually installed. 2025-06-22 20:25:37.561366 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2025-06-22 20:25:38.195839 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-22 20:25:38.196926 | orchestrator | 2025-06-22 20:25:38.196956 | orchestrator | # Status of Prometheus 2025-06-22 20:25:38.196970 | orchestrator | 2025-06-22 20:25:38.196982 | orchestrator | + echo 2025-06-22 20:25:38.197085 | orchestrator | + echo '# Status of Prometheus' 2025-06-22 20:25:38.197098 | orchestrator | + echo 2025-06-22 20:25:38.197109 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-22 20:25:38.262828 | orchestrator | Unauthorized 2025-06-22 20:25:38.266353 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-22 20:25:38.324292 | orchestrator | Unauthorized 2025-06-22 20:25:38.327634 | orchestrator | 2025-06-22 20:25:38.327675 | orchestrator | # Status of RabbitMQ 2025-06-22 20:25:38.327688 | orchestrator | 2025-06-22 20:25:38.327700 | orchestrator | + echo 2025-06-22 20:25:38.327711 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-22 20:25:38.327722 | orchestrator | + echo 2025-06-22 20:25:38.327734 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-22 20:25:38.778148 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-22 20:25:38.786969 | orchestrator | 2025-06-22 20:25:38.787039 | orchestrator | # Status of Redis 2025-06-22 20:25:38.787052 | orchestrator | 2025-06-22 20:25:38.787064 | orchestrator | + echo 2025-06-22 20:25:38.787076 | orchestrator | + echo '# Status of Redis' 2025-06-22 20:25:38.787088 | orchestrator | + echo 2025-06-22 20:25:38.787101 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-22 20:25:38.792627 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001773s;;;0.000000;10.000000 2025-06-22 20:25:38.792993 | orchestrator | + popd 2025-06-22 20:25:38.793314 | orchestrator | 2025-06-22 
20:25:38.793391 | orchestrator | # Create backup of MariaDB database 2025-06-22 20:25:38.793407 | orchestrator | 2025-06-22 20:25:38.793419 | orchestrator | + echo 2025-06-22 20:25:38.793431 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-22 20:25:38.793443 | orchestrator | + echo 2025-06-22 20:25:38.793454 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-22 20:25:40.621303 | orchestrator | 2025-06-22 20:25:40 | INFO  | Task abf5705d-824c-48cd-99d4-7ab572160c73 (mariadb_backup) was prepared for execution. 2025-06-22 20:25:40.621359 | orchestrator | 2025-06-22 20:25:40 | INFO  | It takes a moment until task abf5705d-824c-48cd-99d4-7ab572160c73 (mariadb_backup) has been started and output is visible here. 2025-06-22 20:26:42.923003 | orchestrator | 2025-06-22 20:26:42.923141 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:26:42.923169 | orchestrator | 2025-06-22 20:26:42.923189 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:26:42.923208 | orchestrator | Sunday 22 June 2025 20:25:44 +0000 (0:00:00.196) 0:00:00.196 *********** 2025-06-22 20:26:42.923227 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:26:42.923248 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:26:42.923266 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:26:42.923284 | orchestrator | 2025-06-22 20:26:42.923303 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:26:42.923323 | orchestrator | Sunday 22 June 2025 20:25:44 +0000 (0:00:00.322) 0:00:00.518 *********** 2025-06-22 20:26:42.923341 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-22 20:26:42.923361 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-22 20:26:42.923372 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-22 20:26:42.923383 | orchestrator | 2025-06-22 20:26:42.923394 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-22 20:26:42.923405 | orchestrator | 2025-06-22 20:26:42.923415 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-22 20:26:42.923426 | orchestrator | Sunday 22 June 2025 20:25:44 +0000 (0:00:00.568) 0:00:01.086 *********** 2025-06-22 20:26:42.923437 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:26:42.923448 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 20:26:42.923459 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 20:26:42.923470 | orchestrator | 2025-06-22 20:26:42.923480 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:26:42.923491 | orchestrator | Sunday 22 June 2025 20:25:45 +0000 (0:00:00.396) 0:00:01.483 *********** 2025-06-22 20:26:42.923503 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:26:42.923554 | orchestrator | 2025-06-22 20:26:42.923567 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-22 20:26:42.923596 | orchestrator | Sunday 22 June 2025 20:25:45 +0000 (0:00:00.536) 0:00:02.019 *********** 2025-06-22 20:26:42.923609 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:26:42.923621 | orchestrator | ok: 
[testbed-node-0] 2025-06-22 20:26:42.923633 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:26:42.923645 | orchestrator | 2025-06-22 20:26:42.923658 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-06-22 20:26:42.923671 | orchestrator | Sunday 22 June 2025 20:25:49 +0000 (0:00:03.169) 0:00:05.189 *********** 2025-06-22 20:26:42.923684 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-22 20:26:42.923696 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-22 20:26:42.923709 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 20:26:42.923721 | orchestrator | mariadb_bootstrap_restart 2025-06-22 20:26:42.923739 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:26:42.923789 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:26:42.923811 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:26:42.923831 | orchestrator | 2025-06-22 20:26:42.923851 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-22 20:26:42.923868 | orchestrator | skipping: no hosts matched 2025-06-22 20:26:42.923880 | orchestrator | 2025-06-22 20:26:42.923891 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 20:26:42.923902 | orchestrator | skipping: no hosts matched 2025-06-22 20:26:42.923912 | orchestrator | 2025-06-22 20:26:42.923923 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-22 20:26:42.923934 | orchestrator | skipping: no hosts matched 2025-06-22 20:26:42.923945 | orchestrator | 2025-06-22 20:26:42.923956 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-22 20:26:42.923967 | orchestrator | 2025-06-22 20:26:42.923982 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-22 20:26:42.924001 | orchestrator | Sunday 22 June 2025 20:26:41 +0000 (0:00:52.886) 0:00:58.075 *********** 2025-06-22 20:26:42.924020 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:26:42.924038 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:26:42.924143 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:26:42.924166 | orchestrator | 2025-06-22 20:26:42.924186 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-22 20:26:42.924203 | orchestrator | Sunday 22 June 2025 20:26:42 +0000 (0:00:00.299) 0:00:58.375 *********** 2025-06-22 20:26:42.924223 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:26:42.924242 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:26:42.924260 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:26:42.924278 | orchestrator | 2025-06-22 20:26:42.924297 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:26:42.924319 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:26:42.924332 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:26:42.924344 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:26:42.924354 | orchestrator | 2025-06-22 20:26:42.924365 | orchestrator | 2025-06-22 
20:26:42.924376 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:26:42.924387 | orchestrator | Sunday 22 June 2025 20:26:42 +0000 (0:00:00.393) 0:00:58.768 *********** 2025-06-22 20:26:42.924398 | orchestrator | =============================================================================== 2025-06-22 20:26:42.924409 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 52.89s 2025-06-22 20:26:42.924442 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.17s 2025-06-22 20:26:42.924454 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-06-22 20:26:42.924465 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.54s 2025-06-22 20:26:42.924475 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2025-06-22 20:26:42.924486 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.39s 2025-06-22 20:26:42.924497 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s 2025-06-22 20:26:42.924533 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.30s 2025-06-22 20:26:43.171745 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-22 20:26:43.177286 | orchestrator | + set -e 2025-06-22 20:26:43.177363 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:26:43.177387 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 20:26:43.177409 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:26:43.177428 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:26:43.177479 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:26:43.177498 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:26:43.178375 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 20:26:43.184730 | orchestrator | 2025-06-22 20:26:43.184777 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-22 20:26:43.184790 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-22 20:26:43.184801 | orchestrator | + export OS_CLOUD=admin 2025-06-22 20:26:43.184811 | orchestrator | + OS_CLOUD=admin 2025-06-22 20:26:43.184822 | orchestrator | + echo 2025-06-22 20:26:43.184833 | orchestrator | # OpenStack endpoints 2025-06-22 20:26:43.184844 | orchestrator | 2025-06-22 20:26:43.184855 | orchestrator | + echo '# OpenStack endpoints' 2025-06-22 20:26:43.184865 | orchestrator | + echo 2025-06-22 20:26:43.184876 | orchestrator | + openstack endpoint list 2025-06-22 20:26:46.774098 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:26:46.774238 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-22 20:26:46.774261 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:26:46.774280 | orchestrator | | 036ffa5cd1494491902f178262458d77 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-22 
20:26:46.774299 | orchestrator | | 0477cc77f6744f16b5cd9512dbcfde95 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-22 20:26:46.774318 | orchestrator | | 06133288207c4234917f9e6b23df6bf6 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-22 20:26:46.774336 | orchestrator | | 16fea927f3f240f0a1316c279ca9ac91 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-22 20:26:46.774355 | orchestrator | | 18b92b65cd7c44a2b508a727729b34c8 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-22 20:26:46.774374 | orchestrator | | 2d1793061d4647f6a54ec64aea09e101 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-22 20:26:46.774393 | orchestrator | | 42a4f4206f3a4f18aa0177272d870e89 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-22 20:26:46.774412 | orchestrator | | 45d5c011adee4ab188e954b751e7654c | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-22 20:26:46.774453 | orchestrator | | 4bae5d563ec840c4b35923fd77931a7b | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-22 20:26:46.774466 | orchestrator | | 690d353df68c41bdae512a8e4bc7238d | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-22 20:26:46.774477 | orchestrator | | 8922b2b0071844a490c76a3254399a4e | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-22 20:26:46.774488 | orchestrator | | 8fd811e12e504ff1a0a5512913f4e645 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-22 20:26:46.774500 | orchestrator | | 9bd0d8db301648589450dc93dbba564c | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-22 20:26:46.774541 | orchestrator | | a47d085c70c146a1acf4a3324436dfe9 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-22 20:26:46.774632 | orchestrator | | a4ed6b3ec5984f0b9e6dcc73c57f89ef | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-22 20:26:46.774650 | orchestrator | | aa7d8d5372134ae09f7818c6a466fe8c | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-22 20:26:46.774662 | orchestrator | | af3564ee07c24fc784a87dfcc07de4cc | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-22 20:26:46.774675 | orchestrator | | ce675b5d0ea94afe9796561eba81cf79 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-22 20:26:46.774687 | orchestrator | | cf926d2e3e364c7993a5b0471257f5cb | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-22 20:26:46.774700 | orchestrator | | da3fedace650408d9c5a40e469fd6e71 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-22 20:26:46.774733 | orchestrator | | e5aaf5525c694c1bb87ba4012ded941c | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-22 20:26:46.774746 | orchestrator | | 
ef2e85f53a2949689496be3343a4a69c | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-22 20:26:46.774766 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:26:46.990628 | orchestrator | 2025-06-22 20:26:46.990723 | orchestrator | # Cinder 2025-06-22 20:26:46.990737 | orchestrator | 2025-06-22 20:26:46.990748 | orchestrator | + echo 2025-06-22 20:26:46.990759 | orchestrator | + echo '# Cinder' 2025-06-22 20:26:46.990770 | orchestrator | + echo 2025-06-22 20:26:46.990781 | orchestrator | + openstack volume service list 2025-06-22 20:26:50.708166 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:26:50.708283 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-06-22 20:26:50.708305 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:26:50.708324 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-22T20:26:45.000000 | 2025-06-22 20:26:50.708342 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-22T20:26:46.000000 | 2025-06-22 20:26:50.708359 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-22T20:26:45.000000 | 2025-06-22 20:26:50.708378 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-22T20:26:45.000000 | 2025-06-22 20:26:50.708398 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-22T20:26:47.000000 | 2025-06-22 20:26:50.708417 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-22T20:26:48.000000 | 2025-06-22 20:26:50.708436 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-22T20:26:45.000000 | 2025-06-22 20:26:50.708448 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-22T20:26:46.000000 | 2025-06-22 20:26:50.708459 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-22T20:26:46.000000 | 2025-06-22 20:26:50.708470 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:26:50.950750 | orchestrator | 2025-06-22 20:26:50.950936 | orchestrator | # Neutron 2025-06-22 20:26:50.950954 | orchestrator | 2025-06-22 20:26:50.950967 | orchestrator | + echo 2025-06-22 20:26:50.950978 | orchestrator | + echo '# Neutron' 2025-06-22 20:26:50.950990 | orchestrator | + echo 2025-06-22 20:26:50.951001 | orchestrator | + openstack network agent list 2025-06-22 20:26:54.045176 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:26:54.045361 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-22 20:26:54.045379 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:26:54.045392 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-06-22 
20:26:54.045403 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-06-22 20:26:54.045414 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-06-22 20:26:54.046284 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-06-22 20:26:54.046319 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-06-22 20:26:54.046331 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-06-22 20:26:54.046342 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:26:54.046353 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:26:54.046364 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:26:54.046375 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:26:54.314193 | orchestrator | + openstack network service provider list 2025-06-22 20:26:57.034700 | orchestrator | +---------------+------+---------+ 2025-06-22 20:26:57.034857 | orchestrator | | Service Type | Name | Default | 2025-06-22 20:26:57.034874 | orchestrator | +---------------+------+---------+ 2025-06-22 20:26:57.034886 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-06-22 20:26:57.034897 | orchestrator | +---------------+------+---------+ 2025-06-22 20:26:57.272696 | orchestrator | 2025-06-22 20:26:57.272765 | orchestrator | # Nova 2025-06-22 20:26:57.272778 | orchestrator | 2025-06-22 20:26:57.272789 | orchestrator | + echo 2025-06-22 20:26:57.272801 | orchestrator | + echo '# Nova' 2025-06-22 20:26:57.272811 | orchestrator | + echo 2025-06-22 20:26:57.272823 | orchestrator | + openstack compute service list 2025-06-22 20:27:00.420497 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:27:00.420687 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-06-22 20:27:00.420704 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:27:00.420716 | orchestrator | | 7f4d2fdd-898d-4129-b8f7-3cb565b8c0ef | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-22T20:26:57.000000 | 2025-06-22 20:27:00.420728 | orchestrator | | 1a6cc89d-ecf8-4a28-9b69-432f682ef855 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-22T20:26:52.000000 | 2025-06-22 20:27:00.420762 | orchestrator | | fadeb832-9396-4f01-8ec9-b006e81fb2ef | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-22T20:26:53.000000 | 2025-06-22 20:27:00.420774 | orchestrator | | 5ba8a6c4-65af-4b0d-9fb8-b67a4f7796b9 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-22T20:26:57.000000 | 2025-06-22 20:27:00.420784 | orchestrator | | b171cdb6-84ab-4802-8d84-2d1d0b3819d9 | nova-conductor | testbed-node-1 | internal | enabled | up | 
2025-06-22T20:26:59.000000 | 2025-06-22 20:27:00.420795 | orchestrator | | 11837da0-bb42-4d1f-b27b-b99c16fd31a4 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-22T20:26:59.000000 | 2025-06-22 20:27:00.420806 | orchestrator | | 32a43777-f584-484a-924d-dae4cf3ca2aa | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-22T20:26:55.000000 | 2025-06-22 20:27:00.420816 | orchestrator | | 84cafe2a-546d-4d8a-89b7-1551e9b94af6 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-22T20:26:57.000000 | 2025-06-22 20:27:00.420827 | orchestrator | | 3840b838-05f6-4261-86c2-2ed319326da6 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-22T20:26:57.000000 | 2025-06-22 20:27:00.420838 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:27:00.665895 | orchestrator | + openstack hypervisor list 2025-06-22 20:27:04.888462 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:27:04.888625 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-06-22 20:27:04.888642 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:27:04.888654 | orchestrator | | 8710d31c-14ae-4b6c-911b-d6dd9446cd1c | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-06-22 20:27:04.888665 | orchestrator | | a2c796e9-aa88-413d-b41d-3abba07c53f0 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-06-22 20:27:04.888675 | orchestrator | | d61a7f4c-6a61-457d-9b9c-d754f7f0e7c5 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-06-22 20:27:04.888687 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:27:05.131175 | orchestrator | 2025-06-22 20:27:05.131299 | orchestrator | # Run OpenStack test play 2025-06-22 20:27:05.131330 | orchestrator | 2025-06-22 20:27:05.131343 | orchestrator | + echo 2025-06-22 20:27:05.131355 | orchestrator | + echo '# Run OpenStack test play' 2025-06-22 20:27:05.131381 | orchestrator | + echo 2025-06-22 20:27:05.132011 | orchestrator | + osism apply --environment openstack test 2025-06-22 20:27:06.892905 | orchestrator | 2025-06-22 20:27:06 | INFO  | Trying to run play test in environment openstack 2025-06-22 20:27:06.897272 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:27:06.897321 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:27:06.897341 | orchestrator | Registering Redlock._release_script 2025-06-22 20:27:06.957435 | orchestrator | 2025-06-22 20:27:06 | INFO  | Task c15f31a1-bb64-4b5f-a2bf-923499f6e2e9 (test) was prepared for execution. 2025-06-22 20:27:06.957513 | orchestrator | 2025-06-22 20:27:06 | INFO  | It takes a moment until task c15f31a1-bb64-4b5f-a2bf-923499f6e2e9 (test) has been started and output is visible here. 
2025-06-22 20:33:06.345529 | orchestrator | 2025-06-22 20:33:06.345648 | orchestrator | PLAY [Create test project] ***************************************************** 2025-06-22 20:33:06.345665 | orchestrator | 2025-06-22 20:33:06.345676 | orchestrator | TASK [Create test domain] ****************************************************** 2025-06-22 20:33:06.345688 | orchestrator | Sunday 22 June 2025 20:27:10 +0000 (0:00:00.075) 0:00:00.075 *********** 2025-06-22 20:33:06.345705 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.345784 | orchestrator | 2025-06-22 20:33:06.345800 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-06-22 20:33:06.345820 | orchestrator | Sunday 22 June 2025 20:27:14 +0000 (0:00:03.519) 0:00:03.594 *********** 2025-06-22 20:33:06.345838 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.345856 | orchestrator | 2025-06-22 20:33:06.345899 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-06-22 20:33:06.345919 | orchestrator | Sunday 22 June 2025 20:27:18 +0000 (0:00:04.041) 0:00:07.635 *********** 2025-06-22 20:33:06.345937 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.345948 | orchestrator | 2025-06-22 20:33:06.345959 | orchestrator | TASK [Create test project] ***************************************************** 2025-06-22 20:33:06.345970 | orchestrator | Sunday 22 June 2025 20:27:24 +0000 (0:00:05.862) 0:00:13.498 *********** 2025-06-22 20:33:06.346003 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346097 | orchestrator | 2025-06-22 20:33:06.346114 | orchestrator | TASK [Create test user] ******************************************************** 2025-06-22 20:33:06.346126 | orchestrator | Sunday 22 June 2025 20:27:27 +0000 (0:00:03.755) 0:00:17.253 *********** 2025-06-22 20:33:06.346138 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346165 | orchestrator | 2025-06-22 20:33:06.346178 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-06-22 20:33:06.346202 | orchestrator | Sunday 22 June 2025 20:27:32 +0000 (0:00:04.041) 0:00:21.295 *********** 2025-06-22 20:33:06.346216 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-06-22 20:33:06.346229 | orchestrator | changed: [localhost] => (item=member) 2025-06-22 20:33:06.346243 | orchestrator | changed: [localhost] => (item=creator) 2025-06-22 20:33:06.346255 | orchestrator | 2025-06-22 20:33:06.346267 | orchestrator | TASK [Create test server group] ************************************************ 2025-06-22 20:33:06.346280 | orchestrator | Sunday 22 June 2025 20:27:43 +0000 (0:00:11.645) 0:00:32.941 *********** 2025-06-22 20:33:06.346292 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346304 | orchestrator | 2025-06-22 20:33:06.346316 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-06-22 20:33:06.346329 | orchestrator | Sunday 22 June 2025 20:27:48 +0000 (0:00:04.785) 0:00:37.726 *********** 2025-06-22 20:33:06.346341 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346353 | orchestrator | 2025-06-22 20:33:06.346365 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-06-22 20:33:06.346377 | orchestrator | Sunday 22 June 2025 20:27:53 +0000 (0:00:05.095) 0:00:42.822 *********** 2025-06-22 20:33:06.346398 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346418 | 
orchestrator | 2025-06-22 20:33:06.346436 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-06-22 20:33:06.346456 | orchestrator | Sunday 22 June 2025 20:27:57 +0000 (0:00:04.346) 0:00:47.169 *********** 2025-06-22 20:33:06.346467 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346478 | orchestrator | 2025-06-22 20:33:06.346519 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-06-22 20:33:06.346532 | orchestrator | Sunday 22 June 2025 20:28:02 +0000 (0:00:04.346) 0:00:51.516 *********** 2025-06-22 20:33:06.346543 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346554 | orchestrator | 2025-06-22 20:33:06.346565 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-06-22 20:33:06.346575 | orchestrator | Sunday 22 June 2025 20:28:06 +0000 (0:00:03.844) 0:00:55.361 *********** 2025-06-22 20:33:06.346586 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346597 | orchestrator | 2025-06-22 20:33:06.346608 | orchestrator | TASK [Create test network topology] ******************************************** 2025-06-22 20:33:06.346618 | orchestrator | Sunday 22 June 2025 20:28:10 +0000 (0:00:03.942) 0:00:59.303 *********** 2025-06-22 20:33:06.346629 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.346642 | orchestrator | 2025-06-22 20:33:06.346661 | orchestrator | TASK [Create test instances] *************************************************** 2025-06-22 20:33:06.346680 | orchestrator | Sunday 22 June 2025 20:28:26 +0000 (0:00:16.145) 0:01:15.449 *********** 2025-06-22 20:33:06.346699 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:33:06.346718 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:33:06.346736 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:33:06.346755 | orchestrator | 2025-06-22 20:33:06.346787 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 20:33:06.346806 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:33:06.346826 | orchestrator | 2025-06-22 20:33:06.346845 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 20:33:06.346862 | orchestrator | 2025-06-22 20:33:06.346883 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 20:33:06.346900 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:33:06.346920 | orchestrator | 2025-06-22 20:33:06.346939 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-06-22 20:33:06.346957 | orchestrator | Sunday 22 June 2025 20:31:46 +0000 (0:03:19.924) 0:04:35.373 *********** 2025-06-22 20:33:06.346973 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:33:06.346991 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:33:06.347009 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:33:06.347027 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:33:06.347046 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:33:06.347065 | orchestrator | 2025-06-22 20:33:06.347084 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-06-22 20:33:06.347102 | orchestrator | Sunday 22 June 2025 20:32:09 +0000 (0:00:23.505) 0:04:58.879 
*********** 2025-06-22 20:33:06.347121 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:33:06.347140 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:33:06.347182 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:33:06.347203 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:33:06.347221 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:33:06.347241 | orchestrator | 2025-06-22 20:33:06.347259 | orchestrator | TASK [Create test volume] ****************************************************** 2025-06-22 20:33:06.347277 | orchestrator | Sunday 22 June 2025 20:32:41 +0000 (0:00:31.394) 0:05:30.273 *********** 2025-06-22 20:33:06.347297 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.347321 | orchestrator | 2025-06-22 20:33:06.347339 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-06-22 20:33:06.347357 | orchestrator | Sunday 22 June 2025 20:32:47 +0000 (0:00:06.603) 0:05:36.877 *********** 2025-06-22 20:33:06.347376 | orchestrator | changed: [localhost] 2025-06-22 20:33:06.347396 | orchestrator | 2025-06-22 20:33:06.347415 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-06-22 20:33:06.347436 | orchestrator | Sunday 22 June 2025 20:33:01 +0000 (0:00:13.494) 0:05:50.371 *********** 2025-06-22 20:33:06.347454 | orchestrator | ok: [localhost] 2025-06-22 20:33:06.347473 | orchestrator | 2025-06-22 20:33:06.347560 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-06-22 20:33:06.347582 | orchestrator | Sunday 22 June 2025 20:33:06 +0000 (0:00:04.970) 0:05:55.342 *********** 2025-06-22 20:33:06.347602 | orchestrator | ok: [localhost] => { 2025-06-22 20:33:06.347622 | orchestrator |  "msg": "192.168.112.181" 2025-06-22 20:33:06.347640 | orchestrator | } 2025-06-22 20:33:06.347660 | orchestrator | 2025-06-22 20:33:06.347680 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:33:06.347701 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:33:06.347721 | orchestrator | 2025-06-22 20:33:06.347741 | orchestrator | 2025-06-22 20:33:06.347759 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:33:06.347777 | orchestrator | Sunday 22 June 2025 20:33:06 +0000 (0:00:00.045) 0:05:55.388 *********** 2025-06-22 20:33:06.347797 | orchestrator | =============================================================================== 2025-06-22 20:33:06.347816 | orchestrator | Create test instances ------------------------------------------------- 199.92s 2025-06-22 20:33:06.347835 | orchestrator | Add tag to instances --------------------------------------------------- 31.39s 2025-06-22 20:33:06.347868 | orchestrator | Add metadata to instances ---------------------------------------------- 23.51s 2025-06-22 20:33:06.347888 | orchestrator | Create test network topology ------------------------------------------- 16.15s 2025-06-22 20:33:06.347909 | orchestrator | Attach test volume ----------------------------------------------------- 13.49s 2025-06-22 20:33:06.347927 | orchestrator | Add member roles to user test ------------------------------------------ 11.65s 2025-06-22 20:33:06.347946 | orchestrator | Create test volume ------------------------------------------------------ 6.60s 
2025-06-22 20:33:06.347966 | orchestrator | Add manager role to user test-admin ------------------------------------- 5.86s 2025-06-22 20:33:06.347984 | orchestrator | Create ssh security group ----------------------------------------------- 5.10s 2025-06-22 20:33:06.348003 | orchestrator | Create floating ip address ---------------------------------------------- 4.97s 2025-06-22 20:33:06.348022 | orchestrator | Create test server group ------------------------------------------------ 4.79s 2025-06-22 20:33:06.348039 | orchestrator | Create icmp security group ---------------------------------------------- 4.35s 2025-06-22 20:33:06.348058 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.35s 2025-06-22 20:33:06.348076 | orchestrator | Create test user -------------------------------------------------------- 4.04s 2025-06-22 20:33:06.348095 | orchestrator | Create test-admin user -------------------------------------------------- 4.04s 2025-06-22 20:33:06.348114 | orchestrator | Create test keypair ----------------------------------------------------- 3.94s 2025-06-22 20:33:06.348133 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.84s 2025-06-22 20:33:06.348153 | orchestrator | Create test project ----------------------------------------------------- 3.76s 2025-06-22 20:33:06.348171 | orchestrator | Create test domain ------------------------------------------------------ 3.52s 2025-06-22 20:33:06.348191 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-06-22 20:33:06.592427 | orchestrator | + server_list 2025-06-22 20:33:06.592657 | orchestrator | + openstack --os-cloud test server list 2025-06-22 20:33:10.302421 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:33:10.302552 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-06-22 20:33:10.302568 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:33:10.302581 | orchestrator | | 4b631183-c0eb-4650-bacc-842badfc9feb | test-4 | ACTIVE | auto_allocated_network=10.42.0.52, 192.168.112.132 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:33:10.302614 | orchestrator | | 4583a6cf-5c83-4597-a171-f5b267056356 | test-3 | ACTIVE | auto_allocated_network=10.42.0.43, 192.168.112.140 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:33:10.302625 | orchestrator | | 94464195-04d7-4e2b-91a0-afb05e0ca303 | test-2 | ACTIVE | auto_allocated_network=10.42.0.20, 192.168.112.187 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:33:10.302636 | orchestrator | | ace60722-23fb-4f99-b6f1-36be2eace746 | test-1 | ACTIVE | auto_allocated_network=10.42.0.50, 192.168.112.139 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:33:10.302647 | orchestrator | | a7660aba-175a-44bf-b22e-4dca3c523cdd | test | ACTIVE | auto_allocated_network=10.42.0.25, 192.168.112.181 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:33:10.302658 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:33:10.536584 | orchestrator | + openstack --os-cloud test server show test 2025-06-22 20:33:13.941167 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:13.941266 | orchestrator | | Field | Value | 2025-06-22 20:33:13.941306 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:13.941319 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:33:13.941330 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:33:13.941341 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:33:13.941352 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-06-22 20:33:13.941363 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:33:13.941374 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:33:13.941385 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:33:13.941397 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:33:13.941429 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:33:13.941459 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:33:13.941471 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:33:13.941525 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:33:13.941548 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:33:13.941566 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:33:13.941583 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:33:13.941595 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:28:56.000000 | 2025-06-22 20:33:13.941606 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:33:13.941617 | orchestrator | | accessIPv4 | | 2025-06-22 20:33:13.941628 | orchestrator | | accessIPv6 | | 2025-06-22 20:33:13.941639 | orchestrator | | addresses | auto_allocated_network=10.42.0.25, 192.168.112.181 | 2025-06-22 20:33:13.941666 | orchestrator | | config_drive | | 2025-06-22 20:33:13.941678 | orchestrator | | created | 2025-06-22T20:28:34Z | 2025-06-22 20:33:13.941696 | orchestrator | | description | None | 2025-06-22 20:33:13.941709 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:33:13.941722 | orchestrator | | hostId | c2d9a95d5535bfebc904becb342bc3db263d57e63675fdaadc0b56c2 | 2025-06-22 20:33:13.941734 | orchestrator | | host_status | None | 2025-06-22 20:33:13.941747 | orchestrator | | id | a7660aba-175a-44bf-b22e-4dca3c523cdd | 2025-06-22 20:33:13.941759 | orchestrator | | image | Cirros 0.6.2 (8559e0d9-8782-4298-b29b-da8316be5805) | 2025-06-22 20:33:13.941771 | orchestrator | | key_name | test | 2025-06-22 20:33:13.941784 | orchestrator | | locked | False | 2025-06-22 
20:33:13.941797 | orchestrator | | locked_reason | None | 2025-06-22 20:33:13.941817 | orchestrator | | name | test | 2025-06-22 20:33:13.941837 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:33:13.941850 | orchestrator | | progress | 0 | 2025-06-22 20:33:13.941867 | orchestrator | | project_id | a6acdb45d80946e888051831eedb7adc | 2025-06-22 20:33:13.941880 | orchestrator | | properties | hostname='test' | 2025-06-22 20:33:13.941892 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:33:13.941905 | orchestrator | | | name='ssh' | 2025-06-22 20:33:13.941918 | orchestrator | | server_groups | None | 2025-06-22 20:33:13.941931 | orchestrator | | status | ACTIVE | 2025-06-22 20:33:13.941943 | orchestrator | | tags | test | 2025-06-22 20:33:13.941956 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:33:13.941973 | orchestrator | | updated | 2025-06-22T20:31:50Z | 2025-06-22 20:33:13.941990 | orchestrator | | user_id | 830e73c4cdf34bcc8314a4956c5e7a6b | 2025-06-22 20:33:13.942001 | orchestrator | | volumes_attached | delete_on_termination='False', id='c144d295-d9b9-4028-9afb-32c8a1f6bdab' | 2025-06-22 20:33:13.945005 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:14.221321 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-22 20:33:17.578409 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:17.578482 | orchestrator | | Field | Value | 2025-06-22 20:33:17.578507 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:17.578514 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:33:17.578526 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:33:17.578530 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:33:17.578546 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-06-22 20:33:17.578550 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:33:17.578554 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:33:17.578558 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:33:17.578562 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:33:17.578578 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:33:17.578582 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:33:17.578586 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:33:17.578590 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:33:17.578594 | orchestrator | | OS-EXT-STS:power_state 
| Running | 2025-06-22 20:33:17.578598 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:33:17.578605 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:33:17.578609 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:29:40.000000 | 2025-06-22 20:33:17.578613 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:33:17.578617 | orchestrator | | accessIPv4 | | 2025-06-22 20:33:17.578621 | orchestrator | | accessIPv6 | | 2025-06-22 20:33:17.578625 | orchestrator | | addresses | auto_allocated_network=10.42.0.50, 192.168.112.139 | 2025-06-22 20:33:17.578634 | orchestrator | | config_drive | | 2025-06-22 20:33:17.578638 | orchestrator | | created | 2025-06-22T20:29:18Z | 2025-06-22 20:33:17.578642 | orchestrator | | description | None | 2025-06-22 20:33:17.578646 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:33:17.578649 | orchestrator | | hostId | 696211a2517e5bbb54b7bf4452e16ae801c4f24deb2e8870689a0176 | 2025-06-22 20:33:17.578658 | orchestrator | | host_status | None | 2025-06-22 20:33:17.578662 | orchestrator | | id | ace60722-23fb-4f99-b6f1-36be2eace746 | 2025-06-22 20:33:17.578665 | orchestrator | | image | Cirros 0.6.2 (8559e0d9-8782-4298-b29b-da8316be5805) | 2025-06-22 20:33:17.578669 | orchestrator | | key_name | test | 2025-06-22 20:33:17.578673 | orchestrator | | locked | False | 2025-06-22 20:33:17.578677 | orchestrator | | locked_reason | None | 2025-06-22 20:33:17.578681 | orchestrator | | name | test-1 | 2025-06-22 20:33:17.578689 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:33:17.578693 | orchestrator | | progress | 0 | 2025-06-22 20:33:17.578697 | orchestrator | | project_id | a6acdb45d80946e888051831eedb7adc | 2025-06-22 20:33:17.578701 | orchestrator | | properties | hostname='test-1' | 2025-06-22 20:33:17.578708 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:33:17.578712 | orchestrator | | | name='ssh' | 2025-06-22 20:33:17.578716 | orchestrator | | server_groups | None | 2025-06-22 20:33:17.578719 | orchestrator | | status | ACTIVE | 2025-06-22 20:33:17.578723 | orchestrator | | tags | test | 2025-06-22 20:33:17.578727 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:33:17.578731 | orchestrator | | updated | 2025-06-22T20:31:55Z | 2025-06-22 20:33:17.578739 | orchestrator | | user_id | 830e73c4cdf34bcc8314a4956c5e7a6b | 2025-06-22 20:33:17.578743 | orchestrator | | volumes_attached | | 2025-06-22 20:33:17.580792 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:17.802675 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-22 20:33:20.896078 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:20.896176 | orchestrator | | Field | Value | 2025-06-22 20:33:20.896190 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:20.896201 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:33:20.896211 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:33:20.896221 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:33:20.896234 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-22 20:33:20.896250 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:33:20.896261 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:33:20.896271 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:33:20.896281 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:33:20.896333 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:33:20.896345 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:33:20.896355 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:33:20.896365 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:33:20.896375 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:33:20.896385 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:33:20.896395 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:33:20.896416 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:30:19.000000 | 2025-06-22 20:33:20.896449 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:33:20.896463 | orchestrator | | accessIPv4 | | 2025-06-22 20:33:20.896473 | orchestrator | | accessIPv6 | | 2025-06-22 20:33:20.896567 | orchestrator | | addresses | auto_allocated_network=10.42.0.20, 192.168.112.187 | 2025-06-22 20:33:20.896590 | orchestrator | | config_drive | | 2025-06-22 20:33:20.896601 | orchestrator | | created | 2025-06-22T20:29:57Z | 2025-06-22 20:33:20.896611 | orchestrator | | description | None | 2025-06-22 20:33:20.896621 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:33:20.896631 | orchestrator | | hostId | 775eb843e8b93b85a057c9513f9d8cb1e7fdbef33fa2e896a69cf94d | 2025-06-22 20:33:20.896641 | orchestrator | | host_status | None | 2025-06-22 20:33:20.896650 | orchestrator | | id | 94464195-04d7-4e2b-91a0-afb05e0ca303 | 2025-06-22 20:33:20.896660 | orchestrator | | image | Cirros 0.6.2 (8559e0d9-8782-4298-b29b-da8316be5805) | 2025-06-22 20:33:20.896670 | orchestrator | | key_name | test | 2025-06-22 20:33:20.896685 | orchestrator | | locked | False | 2025-06-22 
20:33:20.896702 | orchestrator | | locked_reason | None | 2025-06-22 20:33:20.896712 | orchestrator | | name | test-2 | 2025-06-22 20:33:20.896728 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:33:20.896738 | orchestrator | | progress | 0 | 2025-06-22 20:33:20.896748 | orchestrator | | project_id | a6acdb45d80946e888051831eedb7adc | 2025-06-22 20:33:20.896758 | orchestrator | | properties | hostname='test-2' | 2025-06-22 20:33:20.896768 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:33:20.896778 | orchestrator | | | name='ssh' | 2025-06-22 20:33:20.896788 | orchestrator | | server_groups | None | 2025-06-22 20:33:20.896797 | orchestrator | | status | ACTIVE | 2025-06-22 20:33:20.896813 | orchestrator | | tags | test | 2025-06-22 20:33:20.896827 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:33:20.896837 | orchestrator | | updated | 2025-06-22T20:32:00Z | 2025-06-22 20:33:20.896852 | orchestrator | | user_id | 830e73c4cdf34bcc8314a4956c5e7a6b | 2025-06-22 20:33:20.896863 | orchestrator | | volumes_attached | | 2025-06-22 20:33:20.900030 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:21.128322 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-22 20:33:24.245820 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:24.245935 | orchestrator | | Field | Value | 2025-06-22 20:33:24.245951 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:24.245964 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:33:24.245976 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:33:24.246011 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:33:24.246079 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-22 20:33:24.246092 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:33:24.246104 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:33:24.246116 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:33:24.246200 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:33:24.246234 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:33:24.246246 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:33:24.246257 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:33:24.246268 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:33:24.246280 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:33:24.246301 | orchestrator | | 
OS-EXT-STS:task_state | None | 2025-06-22 20:33:24.246313 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:33:24.246329 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:30:56.000000 | 2025-06-22 20:33:24.246341 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:33:24.246352 | orchestrator | | accessIPv4 | | 2025-06-22 20:33:24.246364 | orchestrator | | accessIPv6 | | 2025-06-22 20:33:24.246377 | orchestrator | | addresses | auto_allocated_network=10.42.0.43, 192.168.112.140 | 2025-06-22 20:33:24.246397 | orchestrator | | config_drive | | 2025-06-22 20:33:24.246410 | orchestrator | | created | 2025-06-22T20:30:40Z | 2025-06-22 20:33:24.246424 | orchestrator | | description | None | 2025-06-22 20:33:24.246444 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:33:24.246456 | orchestrator | | hostId | c2d9a95d5535bfebc904becb342bc3db263d57e63675fdaadc0b56c2 | 2025-06-22 20:33:24.246470 | orchestrator | | host_status | None | 2025-06-22 20:33:24.246483 | orchestrator | | id | 4583a6cf-5c83-4597-a171-f5b267056356 | 2025-06-22 20:33:24.246521 | orchestrator | | image | Cirros 0.6.2 (8559e0d9-8782-4298-b29b-da8316be5805) | 2025-06-22 20:33:24.246534 | orchestrator | | key_name | test | 2025-06-22 20:33:24.246546 | orchestrator | | locked | False | 2025-06-22 20:33:24.246559 | orchestrator | | locked_reason | None | 2025-06-22 20:33:24.246571 | orchestrator | | name | test-3 | 2025-06-22 20:33:24.246591 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:33:24.246605 | orchestrator | | progress | 0 | 2025-06-22 20:33:24.246625 | orchestrator | | project_id | a6acdb45d80946e888051831eedb7adc | 2025-06-22 20:33:24.246637 | orchestrator | | properties | hostname='test-3' | 2025-06-22 20:33:24.246650 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:33:24.246670 | orchestrator | | | name='ssh' | 2025-06-22 20:33:24.246688 | orchestrator | | server_groups | None | 2025-06-22 20:33:24.246701 | orchestrator | | status | ACTIVE | 2025-06-22 20:33:24.246714 | orchestrator | | tags | test | 2025-06-22 20:33:24.246725 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:33:24.246736 | orchestrator | | updated | 2025-06-22T20:32:04Z | 2025-06-22 20:33:24.246752 | orchestrator | | user_id | 830e73c4cdf34bcc8314a4956c5e7a6b | 2025-06-22 20:33:24.246764 | orchestrator | | volumes_attached | | 2025-06-22 20:33:24.249040 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:24.501517 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-22 20:33:27.572418 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:27.572600 | orchestrator | | Field | Value | 2025-06-22 20:33:27.572627 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:27.572640 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:33:27.572668 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:33:27.572679 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:33:27.572705 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-22 20:33:27.572717 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:33:27.572728 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:33:27.572739 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:33:27.572774 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:33:27.572806 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:33:27.572818 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:33:27.572829 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:33:27.572840 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:33:27.572859 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:33:27.572870 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:33:27.572881 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:33:27.572892 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:31:30.000000 | 2025-06-22 20:33:27.572903 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:33:27.572922 | orchestrator | | accessIPv4 | | 2025-06-22 20:33:27.572933 | orchestrator | | accessIPv6 | | 2025-06-22 20:33:27.572944 | orchestrator | | addresses | auto_allocated_network=10.42.0.52, 192.168.112.132 | 2025-06-22 20:33:27.572962 | orchestrator | | config_drive | | 2025-06-22 20:33:27.572974 | orchestrator | | created | 2025-06-22T20:31:13Z | 2025-06-22 20:33:27.572985 | orchestrator | | description | None | 2025-06-22 20:33:27.572996 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:33:27.573012 | orchestrator | | hostId | 696211a2517e5bbb54b7bf4452e16ae801c4f24deb2e8870689a0176 | 2025-06-22 20:33:27.573023 | orchestrator | | host_status | None | 2025-06-22 20:33:27.573034 | orchestrator | | id | 4b631183-c0eb-4650-bacc-842badfc9feb | 2025-06-22 20:33:27.573046 | orchestrator | | image | Cirros 0.6.2 (8559e0d9-8782-4298-b29b-da8316be5805) | 2025-06-22 20:33:27.573064 | orchestrator | | key_name | test | 2025-06-22 20:33:27.573075 | orchestrator | | locked | False | 2025-06-22 
20:33:27.573087 | orchestrator | | locked_reason | None | 2025-06-22 20:33:27.573098 | orchestrator | | name | test-4 | 2025-06-22 20:33:27.573115 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:33:27.573126 | orchestrator | | progress | 0 | 2025-06-22 20:33:27.573137 | orchestrator | | project_id | a6acdb45d80946e888051831eedb7adc | 2025-06-22 20:33:27.573149 | orchestrator | | properties | hostname='test-4' | 2025-06-22 20:33:27.573164 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:33:27.573175 | orchestrator | | | name='ssh' | 2025-06-22 20:33:27.573187 | orchestrator | | server_groups | None | 2025-06-22 20:33:27.573204 | orchestrator | | status | ACTIVE | 2025-06-22 20:33:27.573215 | orchestrator | | tags | test | 2025-06-22 20:33:27.573226 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:33:27.573237 | orchestrator | | updated | 2025-06-22T20:32:09Z | 2025-06-22 20:33:27.573253 | orchestrator | | user_id | 830e73c4cdf34bcc8314a4956c5e7a6b | 2025-06-22 20:33:27.573264 | orchestrator | | volumes_attached | | 2025-06-22 20:33:27.576432 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:33:27.797291 | orchestrator | + server_ping 2025-06-22 20:33:27.798371 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-22 20:33:27.798402 | orchestrator | ++ tr -d '\r' 2025-06-22 20:33:30.565621 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:33:30.565732 | orchestrator | + ping -c3 192.168.112.140 2025-06-22 20:33:30.575731 | orchestrator | PING 192.168.112.140 (192.168.112.140) 56(84) bytes of data. 2025-06-22 20:33:30.575803 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=1 ttl=63 time=5.50 ms 2025-06-22 20:33:31.574647 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=2 ttl=63 time=2.40 ms 2025-06-22 20:33:32.576353 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=3 ttl=63 time=2.07 ms 2025-06-22 20:33:32.576467 | orchestrator | 2025-06-22 20:33:32.576484 | orchestrator | --- 192.168.112.140 ping statistics --- 2025-06-22 20:33:32.576547 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:33:32.576560 | orchestrator | rtt min/avg/max/mdev = 2.068/3.321/5.498/1.545 ms 2025-06-22 20:33:32.576735 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:33:32.576776 | orchestrator | + ping -c3 192.168.112.187 2025-06-22 20:33:32.589743 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 
2025-06-22 20:33:32.589805 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=8.74 ms 2025-06-22 20:33:33.586361 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.96 ms 2025-06-22 20:33:34.587702 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.02 ms 2025-06-22 20:33:34.587820 | orchestrator | 2025-06-22 20:33:34.587847 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-06-22 20:33:34.587869 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:33:34.587881 | orchestrator | rtt min/avg/max/mdev = 2.020/4.574/8.742/2.971 ms 2025-06-22 20:33:34.587893 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:33:34.587904 | orchestrator | + ping -c3 192.168.112.139 2025-06-22 20:33:34.600256 | orchestrator | PING 192.168.112.139 (192.168.112.139) 56(84) bytes of data. 2025-06-22 20:33:34.600333 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=1 ttl=63 time=8.27 ms 2025-06-22 20:33:35.595602 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=2 ttl=63 time=2.68 ms 2025-06-22 20:33:36.597277 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=3 ttl=63 time=1.91 ms 2025-06-22 20:33:36.597387 | orchestrator | 2025-06-22 20:33:36.597404 | orchestrator | --- 192.168.112.139 ping statistics --- 2025-06-22 20:33:36.597417 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:33:36.597429 | orchestrator | rtt min/avg/max/mdev = 1.907/4.284/8.271/2.836 ms 2025-06-22 20:33:36.597699 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:33:36.597723 | orchestrator | + ping -c3 192.168.112.132 2025-06-22 20:33:36.607534 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 2025-06-22 20:33:36.607586 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=5.70 ms 2025-06-22 20:33:37.606609 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.63 ms 2025-06-22 20:33:38.608888 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.98 ms 2025-06-22 20:33:38.608984 | orchestrator | 2025-06-22 20:33:38.609000 | orchestrator | --- 192.168.112.132 ping statistics --- 2025-06-22 20:33:38.609013 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:33:38.609024 | orchestrator | rtt min/avg/max/mdev = 1.977/3.436/5.701/1.623 ms 2025-06-22 20:33:38.609035 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:33:38.609047 | orchestrator | + ping -c3 192.168.112.181 2025-06-22 20:33:38.620645 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 
2025-06-22 20:33:38.620687 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=8.65 ms 2025-06-22 20:33:39.616643 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.41 ms 2025-06-22 20:33:40.618370 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.21 ms 2025-06-22 20:33:40.618472 | orchestrator | 2025-06-22 20:33:40.618486 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-06-22 20:33:40.618533 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:33:40.618545 | orchestrator | rtt min/avg/max/mdev = 2.210/4.421/8.648/2.989 ms 2025-06-22 20:33:40.618934 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-22 20:33:40.618958 | orchestrator | + compute_list 2025-06-22 20:33:40.618969 | orchestrator | + osism manage compute list testbed-node-3 2025-06-22 20:33:44.277647 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:33:44.277759 | orchestrator | | ID | Name | Status | 2025-06-22 20:33:44.277776 | orchestrator | |--------------------------------------+--------+----------| 2025-06-22 20:33:44.277788 | orchestrator | | 4583a6cf-5c83-4597-a171-f5b267056356 | test-3 | ACTIVE | 2025-06-22 20:33:44.277799 | orchestrator | | a7660aba-175a-44bf-b22e-4dca3c523cdd | test | ACTIVE | 2025-06-22 20:33:44.277810 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:33:44.580319 | orchestrator | + osism manage compute list testbed-node-4 2025-06-22 20:33:47.814721 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:33:47.814889 | orchestrator | | ID | Name | Status | 2025-06-22 20:33:47.814916 | orchestrator | |--------------------------------------+--------+----------| 2025-06-22 20:33:47.814934 | orchestrator | | 4b631183-c0eb-4650-bacc-842badfc9feb | test-4 | ACTIVE | 2025-06-22 20:33:47.814953 | orchestrator | | ace60722-23fb-4f99-b6f1-36be2eace746 | test-1 | ACTIVE | 2025-06-22 20:33:47.814970 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:33:48.041550 | orchestrator | + osism manage compute list testbed-node-5 2025-06-22 20:33:51.138760 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:33:51.138870 | orchestrator | | ID | Name | Status | 2025-06-22 20:33:51.138885 | orchestrator | |--------------------------------------+--------+----------| 2025-06-22 20:33:51.138897 | orchestrator | | 94464195-04d7-4e2b-91a0-afb05e0ca303 | test-2 | ACTIVE | 2025-06-22 20:33:51.138909 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:33:51.386474 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-06-22 20:33:54.321803 | orchestrator | 2025-06-22 20:33:54 | INFO  | Live migrating server 4b631183-c0eb-4650-bacc-842badfc9feb 2025-06-22 20:34:08.592269 | orchestrator | 2025-06-22 20:34:08 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:34:11.217064 | orchestrator | 2025-06-22 20:34:11 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:34:13.856927 | orchestrator | 2025-06-22 20:34:13 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:34:16.268932 | orchestrator | 2025-06-22 20:34:16 | INFO  | Live migration of 
4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:34:18.825498 | orchestrator | 2025-06-22 20:34:18 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:34:21.357625 | orchestrator | 2025-06-22 20:34:21 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:34:23.642196 | orchestrator | 2025-06-22 20:34:23 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:34:26.042491 | orchestrator | 2025-06-22 20:34:26 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) completed with status ACTIVE 2025-06-22 20:34:26.042649 | orchestrator | 2025-06-22 20:34:26 | INFO  | Live migrating server ace60722-23fb-4f99-b6f1-36be2eace746 2025-06-22 20:34:38.613301 | orchestrator | 2025-06-22 20:34:38 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:34:40.978693 | orchestrator | 2025-06-22 20:34:40 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:34:43.305980 | orchestrator | 2025-06-22 20:34:43 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:34:45.663236 | orchestrator | 2025-06-22 20:34:45 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:34:47.975583 | orchestrator | 2025-06-22 20:34:47 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:34:50.227441 | orchestrator | 2025-06-22 20:34:50 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:34:52.510598 | orchestrator | 2025-06-22 20:34:52 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:34:54.827887 | orchestrator | 2025-06-22 20:34:54 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) completed with status ACTIVE 2025-06-22 20:34:55.083912 | orchestrator | + compute_list 2025-06-22 20:34:55.083999 | orchestrator | + osism manage compute list testbed-node-3 2025-06-22 20:34:58.160656 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:34:58.160795 | orchestrator | | ID | Name | Status | 2025-06-22 20:34:58.160820 | orchestrator | |--------------------------------------+--------+----------| 2025-06-22 20:34:58.160834 | orchestrator | | 4b631183-c0eb-4650-bacc-842badfc9feb | test-4 | ACTIVE | 2025-06-22 20:34:58.160845 | orchestrator | | 4583a6cf-5c83-4597-a171-f5b267056356 | test-3 | ACTIVE | 2025-06-22 20:34:58.160856 | orchestrator | | ace60722-23fb-4f99-b6f1-36be2eace746 | test-1 | ACTIVE | 2025-06-22 20:34:58.160867 | orchestrator | | a7660aba-175a-44bf-b22e-4dca3c523cdd | test | ACTIVE | 2025-06-22 20:34:58.160878 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:34:58.404230 | orchestrator | + osism manage compute list testbed-node-4 2025-06-22 20:35:01.063166 | orchestrator | +------+--------+----------+ 2025-06-22 20:35:01.063275 | orchestrator | | ID | Name | Status | 2025-06-22 20:35:01.063290 | orchestrator | |------+--------+----------| 2025-06-22 20:35:01.063302 | orchestrator | +------+--------+----------+ 2025-06-22 20:35:01.316620 | orchestrator | + osism manage compute list testbed-node-5 2025-06-22 20:35:04.226000 | 
orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:35:04.226175 | orchestrator | | ID | Name | Status | 2025-06-22 20:35:04.226191 | orchestrator | |--------------------------------------+--------+----------| 2025-06-22 20:35:04.226677 | orchestrator | | 94464195-04d7-4e2b-91a0-afb05e0ca303 | test-2 | ACTIVE | 2025-06-22 20:35:04.226703 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:35:04.483840 | orchestrator | + server_ping 2025-06-22 20:35:04.485568 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-22 20:35:04.485623 | orchestrator | ++ tr -d '\r' 2025-06-22 20:35:07.268917 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:35:07.269016 | orchestrator | + ping -c3 192.168.112.140 2025-06-22 20:35:07.283956 | orchestrator | PING 192.168.112.140 (192.168.112.140) 56(84) bytes of data. 2025-06-22 20:35:07.284020 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=1 ttl=63 time=11.7 ms 2025-06-22 20:35:08.277311 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=2 ttl=63 time=2.91 ms 2025-06-22 20:35:09.277372 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=3 ttl=63 time=1.94 ms 2025-06-22 20:35:09.277467 | orchestrator | 2025-06-22 20:35:09.277639 | orchestrator | --- 192.168.112.140 ping statistics --- 2025-06-22 20:35:09.277653 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:35:09.277664 | orchestrator | rtt min/avg/max/mdev = 1.938/5.529/11.741/4.410 ms 2025-06-22 20:35:09.277689 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:35:09.277701 | orchestrator | + ping -c3 192.168.112.187 2025-06-22 20:35:09.291002 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 2025-06-22 20:35:09.291053 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=8.00 ms 2025-06-22 20:35:10.285205 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.10 ms 2025-06-22 20:35:11.287019 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.03 ms 2025-06-22 20:35:11.287100 | orchestrator | 2025-06-22 20:35:11.287115 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-06-22 20:35:11.287127 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-22 20:35:11.287138 | orchestrator | rtt min/avg/max/mdev = 2.025/4.039/7.995/2.797 ms 2025-06-22 20:35:11.287149 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:35:11.287407 | orchestrator | + ping -c3 192.168.112.139 2025-06-22 20:35:11.300260 | orchestrator | PING 192.168.112.139 (192.168.112.139) 56(84) bytes of data. 
2025-06-22 20:35:11.300319 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=1 ttl=63 time=8.71 ms 2025-06-22 20:35:12.296384 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=2 ttl=63 time=2.67 ms 2025-06-22 20:35:13.298203 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=3 ttl=63 time=1.92 ms 2025-06-22 20:35:13.298332 | orchestrator | 2025-06-22 20:35:13.298355 | orchestrator | --- 192.168.112.139 ping statistics --- 2025-06-22 20:35:13.298373 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-22 20:35:13.298390 | orchestrator | rtt min/avg/max/mdev = 1.916/4.430/8.709/3.041 ms 2025-06-22 20:35:13.298408 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:35:13.298426 | orchestrator | + ping -c3 192.168.112.132 2025-06-22 20:35:13.309069 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 2025-06-22 20:35:13.309111 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=6.90 ms 2025-06-22 20:35:14.306257 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.88 ms 2025-06-22 20:35:15.306686 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.76 ms 2025-06-22 20:35:15.306829 | orchestrator | 2025-06-22 20:35:15.306858 | orchestrator | --- 192.168.112.132 ping statistics --- 2025-06-22 20:35:15.306879 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-22 20:35:15.306898 | orchestrator | rtt min/avg/max/mdev = 1.764/3.844/6.895/2.204 ms 2025-06-22 20:35:15.307368 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:35:15.307411 | orchestrator | + ping -c3 192.168.112.181 2025-06-22 20:35:15.320568 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 
2025-06-22 20:35:15.320640 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=7.51 ms 2025-06-22 20:35:16.317919 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.83 ms 2025-06-22 20:35:17.317661 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=1.77 ms 2025-06-22 20:35:17.317774 | orchestrator | 2025-06-22 20:35:17.317790 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-06-22 20:35:17.317803 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-22 20:35:17.317815 | orchestrator | rtt min/avg/max/mdev = 1.771/4.038/7.512/2.494 ms 2025-06-22 20:35:17.318207 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-06-22 20:35:20.288105 | orchestrator | 2025-06-22 20:35:20 | INFO  | Live migrating server 94464195-04d7-4e2b-91a0-afb05e0ca303 2025-06-22 20:35:33.772294 | orchestrator | 2025-06-22 20:35:33 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:35:36.167461 | orchestrator | 2025-06-22 20:35:36 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:35:38.510313 | orchestrator | 2025-06-22 20:35:38 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:35:40.791740 | orchestrator | 2025-06-22 20:35:40 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:35:43.150884 | orchestrator | 2025-06-22 20:35:43 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:35:45.536909 | orchestrator | 2025-06-22 20:35:45 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:35:47.853142 | orchestrator | 2025-06-22 20:35:47 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:35:50.181059 | orchestrator | 2025-06-22 20:35:50 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) completed with status ACTIVE 2025-06-22 20:35:50.431234 | orchestrator | + compute_list 2025-06-22 20:35:50.431335 | orchestrator | + osism manage compute list testbed-node-3 2025-06-22 20:35:53.615382 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:35:53.615485 | orchestrator | | ID | Name | Status | 2025-06-22 20:35:53.615500 | orchestrator | |--------------------------------------+--------+----------| 2025-06-22 20:35:53.615512 | orchestrator | | 4b631183-c0eb-4650-bacc-842badfc9feb | test-4 | ACTIVE | 2025-06-22 20:35:53.615641 | orchestrator | | 4583a6cf-5c83-4597-a171-f5b267056356 | test-3 | ACTIVE | 2025-06-22 20:35:53.615657 | orchestrator | | 94464195-04d7-4e2b-91a0-afb05e0ca303 | test-2 | ACTIVE | 2025-06-22 20:35:53.615668 | orchestrator | | ace60722-23fb-4f99-b6f1-36be2eace746 | test-1 | ACTIVE | 2025-06-22 20:35:53.615678 | orchestrator | | a7660aba-175a-44bf-b22e-4dca3c523cdd | test | ACTIVE | 2025-06-22 20:35:53.615689 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:35:53.868270 | orchestrator | + osism manage compute list testbed-node-4 2025-06-22 20:35:56.436958 | orchestrator | +------+--------+----------+ 2025-06-22 20:35:56.437076 | orchestrator | | ID | Name | Status | 2025-06-22 20:35:56.437091 | orchestrator | |------+--------+----------| 2025-06-22 20:35:56.437103 | 
orchestrator | +------+--------+----------+ 2025-06-22 20:35:56.684298 | orchestrator | + osism manage compute list testbed-node-5 2025-06-22 20:35:59.282275 | orchestrator | +------+--------+----------+ 2025-06-22 20:35:59.282381 | orchestrator | | ID | Name | Status | 2025-06-22 20:35:59.282396 | orchestrator | |------+--------+----------| 2025-06-22 20:35:59.282408 | orchestrator | +------+--------+----------+ 2025-06-22 20:35:59.554284 | orchestrator | + server_ping 2025-06-22 20:35:59.555330 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-22 20:35:59.555385 | orchestrator | ++ tr -d '\r' 2025-06-22 20:36:02.350150 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:36:02.350253 | orchestrator | + ping -c3 192.168.112.140 2025-06-22 20:36:02.361175 | orchestrator | PING 192.168.112.140 (192.168.112.140) 56(84) bytes of data. 2025-06-22 20:36:02.361255 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=1 ttl=63 time=9.24 ms 2025-06-22 20:36:03.357992 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=2 ttl=63 time=4.40 ms 2025-06-22 20:36:04.356504 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=3 ttl=63 time=1.93 ms 2025-06-22 20:36:04.356656 | orchestrator | 2025-06-22 20:36:04.356673 | orchestrator | --- 192.168.112.140 ping statistics --- 2025-06-22 20:36:04.356687 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-22 20:36:04.356699 | orchestrator | rtt min/avg/max/mdev = 1.933/5.191/9.243/3.036 ms 2025-06-22 20:36:04.357347 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:36:04.357373 | orchestrator | + ping -c3 192.168.112.187 2025-06-22 20:36:04.369096 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 2025-06-22 20:36:04.369157 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=7.05 ms 2025-06-22 20:36:05.366087 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.58 ms 2025-06-22 20:36:06.368231 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.30 ms 2025-06-22 20:36:06.368335 | orchestrator | 2025-06-22 20:36:06.368351 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-06-22 20:36:06.368364 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:36:06.368375 | orchestrator | rtt min/avg/max/mdev = 2.302/3.976/7.052/2.177 ms 2025-06-22 20:36:06.368628 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:36:06.368652 | orchestrator | + ping -c3 192.168.112.139 2025-06-22 20:36:06.381015 | orchestrator | PING 192.168.112.139 (192.168.112.139) 56(84) bytes of data. 
2025-06-22 20:36:06.381051 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=1 ttl=63 time=8.46 ms 2025-06-22 20:36:07.377364 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=2 ttl=63 time=2.88 ms 2025-06-22 20:36:08.377448 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=3 ttl=63 time=1.70 ms 2025-06-22 20:36:08.377596 | orchestrator | 2025-06-22 20:36:08.377616 | orchestrator | --- 192.168.112.139 ping statistics --- 2025-06-22 20:36:08.377629 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:36:08.377641 | orchestrator | rtt min/avg/max/mdev = 1.695/4.345/8.458/2.948 ms 2025-06-22 20:36:08.377920 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:36:08.377944 | orchestrator | + ping -c3 192.168.112.132 2025-06-22 20:36:08.389524 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 2025-06-22 20:36:08.389682 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=7.06 ms 2025-06-22 20:36:09.386459 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.48 ms 2025-06-22 20:36:10.386840 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.50 ms 2025-06-22 20:36:10.386942 | orchestrator | 2025-06-22 20:36:10.386957 | orchestrator | --- 192.168.112.132 ping statistics --- 2025-06-22 20:36:10.386970 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:36:10.386981 | orchestrator | rtt min/avg/max/mdev = 1.501/3.680/7.064/2.425 ms 2025-06-22 20:36:10.387847 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:36:10.387872 | orchestrator | + ping -c3 192.168.112.181 2025-06-22 20:36:10.398205 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 
2025-06-22 20:36:10.398229 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=6.67 ms 2025-06-22 20:36:11.397350 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.87 ms 2025-06-22 20:36:12.397292 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.23 ms 2025-06-22 20:36:12.397401 | orchestrator | 2025-06-22 20:36:12.397426 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-06-22 20:36:12.397448 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:36:12.397467 | orchestrator | rtt min/avg/max/mdev = 2.228/3.923/6.669/1.959 ms 2025-06-22 20:36:12.397896 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3 2025-06-22 20:36:15.603306 | orchestrator | 2025-06-22 20:36:15 | INFO  | Live migrating server 4b631183-c0eb-4650-bacc-842badfc9feb 2025-06-22 20:36:27.886190 | orchestrator | 2025-06-22 20:36:27 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:36:30.278487 | orchestrator | 2025-06-22 20:36:30 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:36:32.660198 | orchestrator | 2025-06-22 20:36:32 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:36:34.925970 | orchestrator | 2025-06-22 20:36:34 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:36:37.245899 | orchestrator | 2025-06-22 20:36:37 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:36:39.521772 | orchestrator | 2025-06-22 20:36:39 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:36:41.893445 | orchestrator | 2025-06-22 20:36:41 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:36:44.249195 | orchestrator | 2025-06-22 20:36:44 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) completed with status ACTIVE 2025-06-22 20:36:44.249291 | orchestrator | 2025-06-22 20:36:44 | INFO  | Live migrating server 4583a6cf-5c83-4597-a171-f5b267056356 2025-06-22 20:36:54.980917 | orchestrator | 2025-06-22 20:36:54 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:36:57.410196 | orchestrator | 2025-06-22 20:36:57 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:36:59.799371 | orchestrator | 2025-06-22 20:36:59 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:37:02.131923 | orchestrator | 2025-06-22 20:37:02 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:37:04.539630 | orchestrator | 2025-06-22 20:37:04 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:37:06.871087 | orchestrator | 2025-06-22 20:37:06 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:37:09.216881 | orchestrator | 2025-06-22 20:37:09 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) completed with status ACTIVE 2025-06-22 20:37:09.216978 | orchestrator | 2025-06-22 20:37:09 | INFO  | Live migrating server 
94464195-04d7-4e2b-91a0-afb05e0ca303 2025-06-22 20:37:20.363293 | orchestrator | 2025-06-22 20:37:20 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:37:22.741050 | orchestrator | 2025-06-22 20:37:22 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:37:25.113112 | orchestrator | 2025-06-22 20:37:25 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:37:27.405073 | orchestrator | 2025-06-22 20:37:27 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:37:29.748504 | orchestrator | 2025-06-22 20:37:29 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:37:32.081109 | orchestrator | 2025-06-22 20:37:32 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:37:34.426983 | orchestrator | 2025-06-22 20:37:34 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) completed with status ACTIVE 2025-06-22 20:37:34.427087 | orchestrator | 2025-06-22 20:37:34 | INFO  | Live migrating server ace60722-23fb-4f99-b6f1-36be2eace746 2025-06-22 20:37:44.692930 | orchestrator | 2025-06-22 20:37:44 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:37:47.012920 | orchestrator | 2025-06-22 20:37:47 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:37:49.396839 | orchestrator | 2025-06-22 20:37:49 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:37:51.746394 | orchestrator | 2025-06-22 20:37:51 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:37:54.084179 | orchestrator | 2025-06-22 20:37:54 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:37:56.623579 | orchestrator | 2025-06-22 20:37:56 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:37:58.927595 | orchestrator | 2025-06-22 20:37:58 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:38:01.265305 | orchestrator | 2025-06-22 20:38:01 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) completed with status ACTIVE 2025-06-22 20:38:01.265408 | orchestrator | 2025-06-22 20:38:01 | INFO  | Live migrating server a7660aba-175a-44bf-b22e-4dca3c523cdd 2025-06-22 20:38:11.473062 | orchestrator | 2025-06-22 20:38:11 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:13.843001 | orchestrator | 2025-06-22 20:38:13 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:16.206847 | orchestrator | 2025-06-22 20:38:16 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:18.572328 | orchestrator | 2025-06-22 20:38:18 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:20.813393 | orchestrator | 2025-06-22 20:38:20 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:23.081185 | orchestrator | 2025-06-22 20:38:23 | 
INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:25.390889 | orchestrator | 2025-06-22 20:38:25 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:27.710122 | orchestrator | 2025-06-22 20:38:27 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:29.990480 | orchestrator | 2025-06-22 20:38:29 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:38:32.319277 | orchestrator | 2025-06-22 20:38:32 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) completed with status ACTIVE 2025-06-22 20:38:32.652782 | orchestrator | + compute_list 2025-06-22 20:38:32.652880 | orchestrator | + osism manage compute list testbed-node-3 2025-06-22 20:38:35.365527 | orchestrator | +------+--------+----------+ 2025-06-22 20:38:35.365708 | orchestrator | | ID | Name | Status | 2025-06-22 20:38:35.365728 | orchestrator | |------+--------+----------| 2025-06-22 20:38:35.365740 | orchestrator | +------+--------+----------+ 2025-06-22 20:38:35.630477 | orchestrator | + osism manage compute list testbed-node-4 2025-06-22 20:38:38.860785 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:38:38.860909 | orchestrator | | ID | Name | Status | 2025-06-22 20:38:38.860926 | orchestrator | |--------------------------------------+--------+----------| 2025-06-22 20:38:38.860938 | orchestrator | | 4b631183-c0eb-4650-bacc-842badfc9feb | test-4 | ACTIVE | 2025-06-22 20:38:38.860949 | orchestrator | | 4583a6cf-5c83-4597-a171-f5b267056356 | test-3 | ACTIVE | 2025-06-22 20:38:38.860960 | orchestrator | | 94464195-04d7-4e2b-91a0-afb05e0ca303 | test-2 | ACTIVE | 2025-06-22 20:38:38.860978 | orchestrator | | ace60722-23fb-4f99-b6f1-36be2eace746 | test-1 | ACTIVE | 2025-06-22 20:38:38.860996 | orchestrator | | a7660aba-175a-44bf-b22e-4dca3c523cdd | test | ACTIVE | 2025-06-22 20:38:38.861014 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:38:39.121258 | orchestrator | + osism manage compute list testbed-node-5 2025-06-22 20:38:41.739061 | orchestrator | +------+--------+----------+ 2025-06-22 20:38:41.739189 | orchestrator | | ID | Name | Status | 2025-06-22 20:38:41.739214 | orchestrator | |------+--------+----------| 2025-06-22 20:38:41.739235 | orchestrator | +------+--------+----------+ 2025-06-22 20:38:41.984123 | orchestrator | + server_ping 2025-06-22 20:38:41.984746 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-22 20:38:41.985249 | orchestrator | ++ tr -d '\r' 2025-06-22 20:38:45.094997 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:38:45.095126 | orchestrator | + ping -c3 192.168.112.140 2025-06-22 20:38:45.105754 | orchestrator | PING 192.168.112.140 (192.168.112.140) 56(84) bytes of data. 
2025-06-22 20:38:45.105846 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=1 ttl=63 time=8.31 ms 2025-06-22 20:38:46.101875 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=2 ttl=63 time=2.84 ms 2025-06-22 20:38:47.104066 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=3 ttl=63 time=2.29 ms 2025-06-22 20:38:47.104194 | orchestrator | 2025-06-22 20:38:47.104212 | orchestrator | --- 192.168.112.140 ping statistics --- 2025-06-22 20:38:47.104225 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-22 20:38:47.104236 | orchestrator | rtt min/avg/max/mdev = 2.288/4.480/8.314/2.720 ms 2025-06-22 20:38:47.104248 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:38:47.104260 | orchestrator | + ping -c3 192.168.112.187 2025-06-22 20:38:47.117021 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 2025-06-22 20:38:47.117060 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=8.57 ms 2025-06-22 20:38:48.113690 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=2.87 ms 2025-06-22 20:38:49.113909 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=2.20 ms 2025-06-22 20:38:49.114141 | orchestrator | 2025-06-22 20:38:49.114161 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-06-22 20:38:49.114169 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:38:49.114177 | orchestrator | rtt min/avg/max/mdev = 2.203/4.549/8.571/2.856 ms 2025-06-22 20:38:49.114195 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:38:49.114203 | orchestrator | + ping -c3 192.168.112.139 2025-06-22 20:38:49.132007 | orchestrator | PING 192.168.112.139 (192.168.112.139) 56(84) bytes of data. 2025-06-22 20:38:49.132046 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=1 ttl=63 time=13.5 ms 2025-06-22 20:38:50.124066 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=2 ttl=63 time=2.02 ms 2025-06-22 20:38:51.125556 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=3 ttl=63 time=2.35 ms 2025-06-22 20:38:51.125688 | orchestrator | 2025-06-22 20:38:51.125703 | orchestrator | --- 192.168.112.139 ping statistics --- 2025-06-22 20:38:51.125712 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:38:51.125720 | orchestrator | rtt min/avg/max/mdev = 2.015/5.951/13.485/5.329 ms 2025-06-22 20:38:51.125728 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:38:51.125736 | orchestrator | + ping -c3 192.168.112.132 2025-06-22 20:38:51.134426 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 
2025-06-22 20:38:51.134512 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=6.46 ms 2025-06-22 20:38:52.132511 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.55 ms 2025-06-22 20:38:53.134339 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.85 ms 2025-06-22 20:38:53.134437 | orchestrator | 2025-06-22 20:38:53.134453 | orchestrator | --- 192.168.112.132 ping statistics --- 2025-06-22 20:38:53.134466 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-22 20:38:53.134478 | orchestrator | rtt min/avg/max/mdev = 1.851/3.618/6.457/2.027 ms 2025-06-22 20:38:53.134489 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:38:53.134501 | orchestrator | + ping -c3 192.168.112.181 2025-06-22 20:38:53.144429 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 2025-06-22 20:38:53.144481 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=5.80 ms 2025-06-22 20:38:54.143707 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.61 ms 2025-06-22 20:38:55.145340 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.26 ms 2025-06-22 20:38:55.145445 | orchestrator | 2025-06-22 20:38:55.145461 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-06-22 20:38:55.145475 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-22 20:38:55.145486 | orchestrator | rtt min/avg/max/mdev = 2.255/3.555/5.803/1.595 ms 2025-06-22 20:38:55.145908 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4 2025-06-22 20:38:58.418456 | orchestrator | 2025-06-22 20:38:58 | INFO  | Live migrating server 4b631183-c0eb-4650-bacc-842badfc9feb 2025-06-22 20:39:09.262223 | orchestrator | 2025-06-22 20:39:09 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:39:11.649106 | orchestrator | 2025-06-22 20:39:11 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:39:14.153410 | orchestrator | 2025-06-22 20:39:14 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:39:16.429173 | orchestrator | 2025-06-22 20:39:16 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:39:18.731119 | orchestrator | 2025-06-22 20:39:18 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:39:21.023442 | orchestrator | 2025-06-22 20:39:21 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) is still in progress 2025-06-22 20:39:23.459505 | orchestrator | 2025-06-22 20:39:23 | INFO  | Live migration of 4b631183-c0eb-4650-bacc-842badfc9feb (test-4) completed with status ACTIVE 2025-06-22 20:39:23.459614 | orchestrator | 2025-06-22 20:39:23 | INFO  | Live migrating server 4583a6cf-5c83-4597-a171-f5b267056356 2025-06-22 20:39:33.787022 | orchestrator | 2025-06-22 20:39:33 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:39:36.110965 | orchestrator | 2025-06-22 20:39:36 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:39:38.632701 | orchestrator | 2025-06-22 20:39:38 | INFO  | Live migration 
of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:39:40.916369 | orchestrator | 2025-06-22 20:39:40 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:39:43.187185 | orchestrator | 2025-06-22 20:39:43 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:39:45.658755 | orchestrator | 2025-06-22 20:39:45 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) is still in progress 2025-06-22 20:39:48.023103 | orchestrator | 2025-06-22 20:39:48 | INFO  | Live migration of 4583a6cf-5c83-4597-a171-f5b267056356 (test-3) completed with status ACTIVE 2025-06-22 20:39:48.023214 | orchestrator | 2025-06-22 20:39:48 | INFO  | Live migrating server 94464195-04d7-4e2b-91a0-afb05e0ca303 2025-06-22 20:39:58.178299 | orchestrator | 2025-06-22 20:39:58 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:40:00.586466 | orchestrator | 2025-06-22 20:40:00 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:40:02.857216 | orchestrator | 2025-06-22 20:40:02 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:40:05.177836 | orchestrator | 2025-06-22 20:40:05 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:40:07.514336 | orchestrator | 2025-06-22 20:40:07 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:40:09.798573 | orchestrator | 2025-06-22 20:40:09 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:40:12.168811 | orchestrator | 2025-06-22 20:40:12 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) is still in progress 2025-06-22 20:40:14.462165 | orchestrator | 2025-06-22 20:40:14 | INFO  | Live migration of 94464195-04d7-4e2b-91a0-afb05e0ca303 (test-2) completed with status ACTIVE 2025-06-22 20:40:14.462255 | orchestrator | 2025-06-22 20:40:14 | INFO  | Live migrating server ace60722-23fb-4f99-b6f1-36be2eace746 2025-06-22 20:40:24.713776 | orchestrator | 2025-06-22 20:40:24 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:40:27.218312 | orchestrator | 2025-06-22 20:40:27 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:40:29.553998 | orchestrator | 2025-06-22 20:40:29 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:40:31.867190 | orchestrator | 2025-06-22 20:40:31 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:40:34.241163 | orchestrator | 2025-06-22 20:40:34 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:40:36.541793 | orchestrator | 2025-06-22 20:40:36 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:40:38.880267 | orchestrator | 2025-06-22 20:40:38 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 20:40:41.175360 | orchestrator | 2025-06-22 20:40:41 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) is still in progress 2025-06-22 
20:40:43.533391 | orchestrator | 2025-06-22 20:40:43 | INFO  | Live migration of ace60722-23fb-4f99-b6f1-36be2eace746 (test-1) completed with status ACTIVE 2025-06-22 20:40:43.533496 | orchestrator | 2025-06-22 20:40:43 | INFO  | Live migrating server a7660aba-175a-44bf-b22e-4dca3c523cdd 2025-06-22 20:40:53.668208 | orchestrator | 2025-06-22 20:40:53 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:40:56.088347 | orchestrator | 2025-06-22 20:40:56 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:40:58.430541 | orchestrator | 2025-06-22 20:40:58 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:41:00.765898 | orchestrator | 2025-06-22 20:41:00 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:41:03.037794 | orchestrator | 2025-06-22 20:41:03 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:41:05.326143 | orchestrator | 2025-06-22 20:41:05 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:41:07.696515 | orchestrator | 2025-06-22 20:41:07 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:41:09.994560 | orchestrator | 2025-06-22 20:41:09 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:41:12.272113 | orchestrator | 2025-06-22 20:41:12 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:41:14.644320 | orchestrator | 2025-06-22 20:41:14 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) is still in progress 2025-06-22 20:41:16.957110 | orchestrator | 2025-06-22 20:41:16 | INFO  | Live migration of a7660aba-175a-44bf-b22e-4dca3c523cdd (test) completed with status ACTIVE 2025-06-22 20:41:17.217109 | orchestrator | + compute_list 2025-06-22 20:41:17.217204 | orchestrator | + osism manage compute list testbed-node-3 2025-06-22 20:41:19.841358 | orchestrator | +------+--------+----------+ 2025-06-22 20:41:19.841462 | orchestrator | | ID | Name | Status | 2025-06-22 20:41:19.841477 | orchestrator | |------+--------+----------| 2025-06-22 20:41:19.841488 | orchestrator | +------+--------+----------+ 2025-06-22 20:41:20.100473 | orchestrator | + osism manage compute list testbed-node-4 2025-06-22 20:41:22.778263 | orchestrator | +------+--------+----------+ 2025-06-22 20:41:22.778380 | orchestrator | | ID | Name | Status | 2025-06-22 20:41:22.778396 | orchestrator | |------+--------+----------| 2025-06-22 20:41:22.778408 | orchestrator | +------+--------+----------+ 2025-06-22 20:41:23.136120 | orchestrator | + osism manage compute list testbed-node-5 2025-06-22 20:41:26.199922 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:41:26.200037 | orchestrator | | ID | Name | Status | 2025-06-22 20:41:26.200053 | orchestrator | |--------------------------------------+--------+----------| 2025-06-22 20:41:26.200064 | orchestrator | | 4b631183-c0eb-4650-bacc-842badfc9feb | test-4 | ACTIVE | 2025-06-22 20:41:26.200075 | orchestrator | | 4583a6cf-5c83-4597-a171-f5b267056356 | test-3 | ACTIVE | 2025-06-22 20:41:26.200086 | orchestrator | | 94464195-04d7-4e2b-91a0-afb05e0ca303 | test-2 | ACTIVE | 2025-06-22 20:41:26.200097 | orchestrator | | 
ace60722-23fb-4f99-b6f1-36be2eace746 | test-1 | ACTIVE | 2025-06-22 20:41:26.200135 | orchestrator | | a7660aba-175a-44bf-b22e-4dca3c523cdd | test | ACTIVE | 2025-06-22 20:41:26.200147 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-22 20:41:26.468090 | orchestrator | + server_ping 2025-06-22 20:41:26.469539 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-22 20:41:26.469579 | orchestrator | ++ tr -d '\r' 2025-06-22 20:41:29.244059 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:41:29.244160 | orchestrator | + ping -c3 192.168.112.140 2025-06-22 20:41:29.260442 | orchestrator | PING 192.168.112.140 (192.168.112.140) 56(84) bytes of data. 2025-06-22 20:41:29.260482 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=1 ttl=63 time=13.3 ms 2025-06-22 20:41:30.251536 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=2 ttl=63 time=2.38 ms 2025-06-22 20:41:31.253161 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=3 ttl=63 time=1.91 ms 2025-06-22 20:41:31.253269 | orchestrator | 2025-06-22 20:41:31.253284 | orchestrator | --- 192.168.112.140 ping statistics --- 2025-06-22 20:41:31.253298 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:41:31.253309 | orchestrator | rtt min/avg/max/mdev = 1.907/5.852/13.276/5.252 ms 2025-06-22 20:41:31.253676 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:41:31.253703 | orchestrator | + ping -c3 192.168.112.187 2025-06-22 20:41:31.266286 | orchestrator | PING 192.168.112.187 (192.168.112.187) 56(84) bytes of data. 2025-06-22 20:41:31.266372 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=1 ttl=63 time=8.46 ms 2025-06-22 20:41:32.263108 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=2 ttl=63 time=3.55 ms 2025-06-22 20:41:33.263891 | orchestrator | 64 bytes from 192.168.112.187: icmp_seq=3 ttl=63 time=1.87 ms 2025-06-22 20:41:33.263989 | orchestrator | 2025-06-22 20:41:33.264005 | orchestrator | --- 192.168.112.187 ping statistics --- 2025-06-22 20:41:33.264018 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:41:33.264029 | orchestrator | rtt min/avg/max/mdev = 1.866/4.622/8.456/2.796 ms 2025-06-22 20:41:33.264041 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:41:33.264054 | orchestrator | + ping -c3 192.168.112.139 2025-06-22 20:41:33.277954 | orchestrator | PING 192.168.112.139 (192.168.112.139) 56(84) bytes of data. 
2025-06-22 20:41:33.277997 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=1 ttl=63 time=9.71 ms 2025-06-22 20:41:34.273152 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=2 ttl=63 time=2.92 ms 2025-06-22 20:41:35.274104 | orchestrator | 64 bytes from 192.168.112.139: icmp_seq=3 ttl=63 time=2.81 ms 2025-06-22 20:41:35.274259 | orchestrator | 2025-06-22 20:41:35.274277 | orchestrator | --- 192.168.112.139 ping statistics --- 2025-06-22 20:41:35.274291 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:41:35.274302 | orchestrator | rtt min/avg/max/mdev = 2.809/5.144/9.707/3.226 ms 2025-06-22 20:41:35.274405 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:41:35.274422 | orchestrator | + ping -c3 192.168.112.132 2025-06-22 20:41:35.284654 | orchestrator | PING 192.168.112.132 (192.168.112.132) 56(84) bytes of data. 2025-06-22 20:41:35.284700 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=1 ttl=63 time=6.57 ms 2025-06-22 20:41:36.283029 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=2 ttl=63 time=2.69 ms 2025-06-22 20:41:37.284008 | orchestrator | 64 bytes from 192.168.112.132: icmp_seq=3 ttl=63 time=1.98 ms 2025-06-22 20:41:37.284121 | orchestrator | 2025-06-22 20:41:37.284137 | orchestrator | --- 192.168.112.132 ping statistics --- 2025-06-22 20:41:37.284150 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-22 20:41:37.284161 | orchestrator | rtt min/avg/max/mdev = 1.982/3.745/6.569/2.017 ms 2025-06-22 20:41:37.284438 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:41:37.284461 | orchestrator | + ping -c3 192.168.112.181 2025-06-22 20:41:37.296809 | orchestrator | PING 192.168.112.181 (192.168.112.181) 56(84) bytes of data. 
2025-06-22 20:41:37.296870 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=1 ttl=63 time=7.66 ms 2025-06-22 20:41:38.293799 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=2 ttl=63 time=2.30 ms 2025-06-22 20:41:39.295172 | orchestrator | 64 bytes from 192.168.112.181: icmp_seq=3 ttl=63 time=2.01 ms 2025-06-22 20:41:39.295302 | orchestrator | 2025-06-22 20:41:39.295319 | orchestrator | --- 192.168.112.181 ping statistics --- 2025-06-22 20:41:39.295332 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-22 20:41:39.295343 | orchestrator | rtt min/avg/max/mdev = 2.010/3.990/7.659/2.596 ms 2025-06-22 20:41:39.396402 | orchestrator | ok: Runtime: 0:18:15.527365 2025-06-22 20:41:39.441480 | 2025-06-22 20:41:39.441662 | TASK [Run tempest] 2025-06-22 20:41:39.994906 | orchestrator | skipping: Conditional result was False 2025-06-22 20:41:40.012104 | 2025-06-22 20:41:40.012301 | TASK [Check prometheus alert status] 2025-06-22 20:41:40.553702 | orchestrator | skipping: Conditional result was False 2025-06-22 20:41:40.556349 | 2025-06-22 20:41:40.556513 | PLAY RECAP 2025-06-22 20:41:40.556695 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-06-22 20:41:40.556764 | 2025-06-22 20:41:40.780873 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-06-22 20:41:40.783483 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-22 20:41:41.546987 | 2025-06-22 20:41:41.547160 | PLAY [Post output play] 2025-06-22 20:41:41.566236 | 2025-06-22 20:41:41.566413 | LOOP [stage-output : Register sources] 2025-06-22 20:41:41.639378 | 2025-06-22 20:41:41.639756 | TASK [stage-output : Check sudo] 2025-06-22 20:41:42.437789 | orchestrator | sudo: a password is required 2025-06-22 20:41:42.681699 | orchestrator | ok: Runtime: 0:00:00.014416 2025-06-22 20:41:42.698110 | 2025-06-22 20:41:42.698321 | LOOP [stage-output : Set source and destination for files and folders] 2025-06-22 20:41:42.741003 | 2025-06-22 20:41:42.741291 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-06-22 20:41:42.822252 | orchestrator | ok 2025-06-22 20:41:42.830978 | 2025-06-22 20:41:42.831120 | LOOP [stage-output : Ensure target folders exist] 2025-06-22 20:41:43.276160 | orchestrator | ok: "docs" 2025-06-22 20:41:43.276488 | 2025-06-22 20:41:43.508599 | orchestrator | ok: "artifacts" 2025-06-22 20:41:43.737025 | orchestrator | ok: "logs" 2025-06-22 20:41:43.754351 | 2025-06-22 20:41:43.754513 | LOOP [stage-output : Copy files and folders to staging folder] 2025-06-22 20:41:43.794576 | 2025-06-22 20:41:43.794983 | TASK [stage-output : Make all log files readable] 2025-06-22 20:41:44.078553 | orchestrator | ok 2025-06-22 20:41:44.087901 | 2025-06-22 20:41:44.088039 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-06-22 20:41:44.122523 | orchestrator | skipping: Conditional result was False 2025-06-22 20:41:44.137420 | 2025-06-22 20:41:44.137586 | TASK [stage-output : Discover log files for compression] 2025-06-22 20:41:44.161854 | orchestrator | skipping: Conditional result was False 2025-06-22 20:41:44.173759 | 2025-06-22 20:41:44.174054 | LOOP [stage-output : Archive everything from logs] 2025-06-22 20:41:44.220695 | 2025-06-22 20:41:44.220890 | PLAY [Post cleanup play] 2025-06-22 20:41:44.231292 | 2025-06-22 20:41:44.231413 | TASK [Set cloud fact (Zuul deployment)] 2025-06-22 20:41:44.291339 | orchestrator | ok 2025-06-22 
20:41:44.303021 | 2025-06-22 20:41:44.303152 | TASK [Set cloud fact (local deployment)] 2025-06-22 20:41:44.328678 | orchestrator | skipping: Conditional result was False 2025-06-22 20:41:44.341697 | 2025-06-22 20:41:44.341846 | TASK [Clean the cloud environment] 2025-06-22 20:41:46.770795 | orchestrator | 2025-06-22 20:41:46 - clean up servers 2025-06-22 20:41:47.549956 | orchestrator | 2025-06-22 20:41:47 - testbed-manager 2025-06-22 20:41:47.639590 | orchestrator | 2025-06-22 20:41:47 - testbed-node-4 2025-06-22 20:41:47.731324 | orchestrator | 2025-06-22 20:41:47 - testbed-node-1 2025-06-22 20:41:47.822231 | orchestrator | 2025-06-22 20:41:47 - testbed-node-5 2025-06-22 20:41:47.919554 | orchestrator | 2025-06-22 20:41:47 - testbed-node-2 2025-06-22 20:41:48.016736 | orchestrator | 2025-06-22 20:41:48 - testbed-node-0 2025-06-22 20:41:48.113945 | orchestrator | 2025-06-22 20:41:48 - testbed-node-3 2025-06-22 20:41:48.204525 | orchestrator | 2025-06-22 20:41:48 - clean up keypairs 2025-06-22 20:41:48.226704 | orchestrator | 2025-06-22 20:41:48 - testbed 2025-06-22 20:41:48.252240 | orchestrator | 2025-06-22 20:41:48 - wait for servers to be gone 2025-06-22 20:41:56.903195 | orchestrator | 2025-06-22 20:41:56 - clean up ports 2025-06-22 20:41:57.079897 | orchestrator | 2025-06-22 20:41:57 - 0a8eca33-b257-4cde-8abf-1cdbe1f815de 2025-06-22 20:41:57.754740 | orchestrator | 2025-06-22 20:41:57 - 1bbe0086-d5cf-4f82-a7a1-1576dd1261bb 2025-06-22 20:41:57.998923 | orchestrator | 2025-06-22 20:41:57 - 8a0b7986-218b-4e84-96f5-1cbe9de44500 2025-06-22 20:41:58.207638 | orchestrator | 2025-06-22 20:41:58 - a9934e36-751f-4e4e-bb92-45fb660ceb9c 2025-06-22 20:41:58.409164 | orchestrator | 2025-06-22 20:41:58 - aa24d7a4-5e5d-42e9-8464-07c763efe5f5 2025-06-22 20:41:58.629519 | orchestrator | 2025-06-22 20:41:58 - dca0d463-4f09-420a-a048-12573b0e3ea2 2025-06-22 20:41:58.833584 | orchestrator | 2025-06-22 20:41:58 - e8acfea4-3197-442b-bb79-c2e5935156aa 2025-06-22 20:41:59.211053 | orchestrator | 2025-06-22 20:41:59 - clean up volumes 2025-06-22 20:41:59.326329 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-5-node-base 2025-06-22 20:41:59.365566 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-manager-base 2025-06-22 20:41:59.405370 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-4-node-base 2025-06-22 20:41:59.446138 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-3-node-base 2025-06-22 20:41:59.490631 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-2-node-base 2025-06-22 20:41:59.529713 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-1-node-base 2025-06-22 20:41:59.573207 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-0-node-base 2025-06-22 20:41:59.615820 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-0-node-3 2025-06-22 20:41:59.655729 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-2-node-5 2025-06-22 20:41:59.697642 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-6-node-3 2025-06-22 20:41:59.737864 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-1-node-4 2025-06-22 20:41:59.806961 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-5-node-5 2025-06-22 20:41:59.847760 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-4-node-4 2025-06-22 20:41:59.894901 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-8-node-5 2025-06-22 20:41:59.936678 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-3-node-3 2025-06-22 20:41:59.978338 | orchestrator | 2025-06-22 20:41:59 - testbed-volume-7-node-4 2025-06-22 20:42:00.019270 | 
orchestrator | 2025-06-22 20:42:00 - disconnect routers 2025-06-22 20:42:00.137793 | orchestrator | 2025-06-22 20:42:00 - testbed 2025-06-22 20:42:01.146143 | orchestrator | 2025-06-22 20:42:01 - clean up subnets 2025-06-22 20:42:01.189806 | orchestrator | 2025-06-22 20:42:01 - subnet-testbed-management 2025-06-22 20:42:01.357728 | orchestrator | 2025-06-22 20:42:01 - clean up networks 2025-06-22 20:42:01.547424 | orchestrator | 2025-06-22 20:42:01 - net-testbed-management 2025-06-22 20:42:01.859662 | orchestrator | 2025-06-22 20:42:01 - clean up security groups 2025-06-22 20:42:01.897093 | orchestrator | 2025-06-22 20:42:01 - testbed-node 2025-06-22 20:42:02.010928 | orchestrator | 2025-06-22 20:42:02 - testbed-management 2025-06-22 20:42:02.144129 | orchestrator | 2025-06-22 20:42:02 - clean up floating ips 2025-06-22 20:42:02.179943 | orchestrator | 2025-06-22 20:42:02 - 81.163.193.19 2025-06-22 20:42:02.531756 | orchestrator | 2025-06-22 20:42:02 - clean up routers 2025-06-22 20:42:02.648647 | orchestrator | 2025-06-22 20:42:02 - testbed 2025-06-22 20:42:04.402365 | orchestrator | ok: Runtime: 0:00:19.298181 2025-06-22 20:42:04.407089 | 2025-06-22 20:42:04.407296 | PLAY RECAP 2025-06-22 20:42:04.407496 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-06-22 20:42:04.407603 | 2025-06-22 20:42:04.546034 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-22 20:42:04.548554 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-22 20:42:05.334467 | 2025-06-22 20:42:05.334655 | PLAY [Cleanup play] 2025-06-22 20:42:05.350794 | 2025-06-22 20:42:05.350949 | TASK [Set cloud fact (Zuul deployment)] 2025-06-22 20:42:05.413909 | orchestrator | ok 2025-06-22 20:42:05.423298 | 2025-06-22 20:42:05.423461 | TASK [Set cloud fact (local deployment)] 2025-06-22 20:42:05.449805 | orchestrator | skipping: Conditional result was False 2025-06-22 20:42:05.470123 | 2025-06-22 20:42:05.470287 | TASK [Clean the cloud environment] 2025-06-22 20:42:06.632661 | orchestrator | 2025-06-22 20:42:06 - clean up servers 2025-06-22 20:42:07.126074 | orchestrator | 2025-06-22 20:42:07 - clean up keypairs 2025-06-22 20:42:07.145044 | orchestrator | 2025-06-22 20:42:07 - wait for servers to be gone 2025-06-22 20:42:07.191044 | orchestrator | 2025-06-22 20:42:07 - clean up ports 2025-06-22 20:42:07.265196 | orchestrator | 2025-06-22 20:42:07 - clean up volumes 2025-06-22 20:42:07.325424 | orchestrator | 2025-06-22 20:42:07 - disconnect routers 2025-06-22 20:42:07.345564 | orchestrator | 2025-06-22 20:42:07 - clean up subnets 2025-06-22 20:42:07.363042 | orchestrator | 2025-06-22 20:42:07 - clean up networks 2025-06-22 20:42:07.513168 | orchestrator | 2025-06-22 20:42:07 - clean up security groups 2025-06-22 20:42:07.549343 | orchestrator | 2025-06-22 20:42:07 - clean up floating ips 2025-06-22 20:42:07.573336 | orchestrator | 2025-06-22 20:42:07 - clean up routers 2025-06-22 20:42:08.007930 | orchestrator | ok: Runtime: 0:00:01.343521 2025-06-22 20:42:08.011574 | 2025-06-22 20:42:08.011751 | PLAY RECAP 2025-06-22 20:42:08.011870 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-06-22 20:42:08.011931 | 2025-06-22 20:42:08.142294 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-22 20:42:08.144688 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 
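The "Clean the cloud environment" task above tears the testbed down in dependency order: servers and keypairs first, then (once the servers are gone) ports and volumes, followed by disconnecting the router, deleting subnets, networks, security groups, floating IPs, and finally the router itself. The job drives this through its own cleanup script; a rough hand-written equivalent of the same order using the standard openstack CLI is sketched below (cloud credentials and resource names are illustrative, taken from the log output above):

# Assumed CLI equivalent of the traced teardown order; the real task is a script, not these commands.
openstack server delete --wait testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5
openstack keypair delete testbed
# Ports and volumes can only go once the servers have been deleted (--wait above covers "wait for servers to be gone").
openstack port list -f value -c ID | xargs -r -n1 openstack port delete
openstack volume list -f value -c ID | xargs -r -n1 openstack volume delete
# Detach the management subnet from the router before removing the networking objects.
openstack router remove subnet testbed subnet-testbed-management
openstack subnet delete subnet-testbed-management
openstack network delete net-testbed-management
openstack security group delete testbed-node testbed-management
openstack floating ip list -f value -c ID | xargs -r -n1 openstack floating ip delete
openstack router delete testbed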
2025-06-22 20:42:08.897116 | 2025-06-22 20:42:08.897284 | PLAY [Base post-fetch] 2025-06-22 20:42:08.913201 | 2025-06-22 20:42:08.913343 | TASK [fetch-output : Set log path for multiple nodes] 2025-06-22 20:42:08.968833 | orchestrator | skipping: Conditional result was False 2025-06-22 20:42:08.985102 | 2025-06-22 20:42:08.985344 | TASK [fetch-output : Set log path for single node] 2025-06-22 20:42:09.034639 | orchestrator | ok 2025-06-22 20:42:09.043713 | 2025-06-22 20:42:09.043865 | LOOP [fetch-output : Ensure local output dirs] 2025-06-22 20:42:09.583779 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/26d6f32dffdd486a882d9dd5a6805904/work/logs" 2025-06-22 20:42:09.854955 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/26d6f32dffdd486a882d9dd5a6805904/work/artifacts" 2025-06-22 20:42:10.121669 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/26d6f32dffdd486a882d9dd5a6805904/work/docs" 2025-06-22 20:42:10.148570 | 2025-06-22 20:42:10.148763 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-06-22 20:42:11.098139 | orchestrator | changed: .d..t...... ./ 2025-06-22 20:42:11.098493 | orchestrator | changed: All items complete 2025-06-22 20:42:11.098553 | 2025-06-22 20:42:11.842111 | orchestrator | changed: .d..t...... ./ 2025-06-22 20:42:12.610239 | orchestrator | changed: .d..t...... ./ 2025-06-22 20:42:12.644509 | 2025-06-22 20:42:12.644698 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-06-22 20:42:12.682976 | orchestrator | skipping: Conditional result was False 2025-06-22 20:42:12.685983 | orchestrator | skipping: Conditional result was False 2025-06-22 20:42:12.710536 | 2025-06-22 20:42:12.710687 | PLAY RECAP 2025-06-22 20:42:12.710769 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-06-22 20:42:12.710812 | 2025-06-22 20:42:12.840925 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-22 20:42:12.843828 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-22 20:42:13.619863 | 2025-06-22 20:42:13.620038 | PLAY [Base post] 2025-06-22 20:42:13.634896 | 2025-06-22 20:42:13.635043 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-06-22 20:42:14.640145 | orchestrator | changed 2025-06-22 20:42:14.650288 | 2025-06-22 20:42:14.650422 | PLAY RECAP 2025-06-22 20:42:14.650494 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-06-22 20:42:14.650561 | 2025-06-22 20:42:14.767991 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-22 20:42:14.770303 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-06-22 20:42:15.605602 | 2025-06-22 20:42:15.605821 | PLAY [Base post-logs] 2025-06-22 20:42:15.616576 | 2025-06-22 20:42:15.616739 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-06-22 20:42:16.084381 | localhost | changed 2025-06-22 20:42:16.100339 | 2025-06-22 20:42:16.100524 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-06-22 20:42:16.139249 | localhost | ok 2025-06-22 20:42:16.146579 | 2025-06-22 20:42:16.146790 | TASK [Set zuul-log-path fact] 2025-06-22 20:42:16.164908 | localhost | ok 2025-06-22 20:42:16.178999 | 2025-06-22 20:42:16.179153 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-06-22 20:42:16.219110 | localhost | ok 2025-06-22 20:42:16.225704 
| 2025-06-22 20:42:16.225872 | TASK [upload-logs : Create log directories] 2025-06-22 20:42:16.762407 | localhost | changed 2025-06-22 20:42:16.765316 | 2025-06-22 20:42:16.765458 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-06-22 20:42:17.270939 | localhost -> localhost | ok: Runtime: 0:00:00.005499 2025-06-22 20:42:17.279324 | 2025-06-22 20:42:17.279506 | TASK [upload-logs : Upload logs to log server] 2025-06-22 20:42:17.857227 | localhost | Output suppressed because no_log was given 2025-06-22 20:42:17.860749 | 2025-06-22 20:42:17.860933 | LOOP [upload-logs : Compress console log and json output] 2025-06-22 20:42:17.922727 | localhost | skipping: Conditional result was False 2025-06-22 20:42:17.927484 | localhost | skipping: Conditional result was False 2025-06-22 20:42:17.941204 | 2025-06-22 20:42:17.941407 | LOOP [upload-logs : Upload compressed console log and json output] 2025-06-22 20:42:17.986416 | localhost | skipping: Conditional result was False 2025-06-22 20:42:17.987199 | 2025-06-22 20:42:17.990580 | localhost | skipping: Conditional result was False 2025-06-22 20:42:18.005097 | 2025-06-22 20:42:18.005356 | LOOP [upload-logs : Upload console log and json output]
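Taken together, the deploy run above repeats one pattern per compute node: list the instances on each hypervisor, live-migrate them away with osism, list again to confirm the source node is empty, and ping every floating IP to confirm the workload stayed reachable throughout. A condensed reconstruction of one such round, using only commands that appear in the trace (the compute_list and server_ping helpers are reconstructions; the actual test script is not included in this log):

compute_list() {
    # The trace always lists the same three hypervisors in this order.
    for node in testbed-node-3 testbed-node-4 testbed-node-5; do
        osism manage compute list "$node"
    done
}

# One evacuation round, as traced: move everything from testbed-node-4 onto
# testbed-node-3, then verify placement and connectivity.
compute_list
osism manage compute migrate --yes --target testbed-node-3 testbed-node-4
compute_list
server_ping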